Merge pull request #313 from infosiftr/explicit-code-type

Use explicit-type code blocks in a lot of obvious places…
This commit is contained in:
yosifkit 2015-08-13 12:37:57 -07:00
commit fd6ae7a5b7
73 changed files with 1288 additions and 667 deletions


@ -18,7 +18,7 @@ All Markdown files here are run through [tianon's fork of `markdownfmt`](https:/
Optionally: (we run this periodically, especially before pushing updated descriptions)

-	run `./update.sh myimage` to generate `myimage/README.md`
-	run `./markdownfmt.sh -l myimage` to verify that the format of your Markdown files is compliant with `tianon/markdownfmt`. If it prints any file names, markdownfmt detected issues, which might result in a failed build during continuous integration.

# What are all these files?
@ -54,16 +54,18 @@ This file is generated using `update.sh`.
This file contains the main content of your image's long description. The basic parts you should have are a "What Is" section and a "How To" section. See the doc on [Official Repos](https://docs.docker.com/docker-hub/official_repos/#a-long-description) for more information on the long description. The issues and contribution section is generated by the script but can be overridden. The following is a basic layout:

```markdown
# What is XYZ?

// about what the contained software is

%%LOGO%%

# How to use this image

// descriptions and examples of common use cases for the image
// make use of subsections as necessary
```

## `<image name>/README-short.txt`
@ -79,7 +81,9 @@ Logo for the contained software. Specifications can be found in the docs on [Off
This file should contain a link to the license for the main software in the image. Here is an example for `golang`:

```markdown
View [license information](http://golang.org/LICENSE) for the software contained in this image.
```

## `<image name>/user-feedback.md`
@ -89,7 +93,9 @@ This file is an optional override of the default `user-feedback.md` for those re
This file is a snippet that gets inserted into the user feedback section to provide an extra way to get help, like a mailing list. Here is an example from the Postgres image:

```markdown
on the [mailing list](http://www.postgresql.org/community/lists/subscribe/) or
```

# Issues and Contributing


@ -10,7 +10,9 @@ Documentation for Aerospike is available at [http://aerospike.com/docs](https://
The following will run `asd` with all the exposed ports forwarded to the host machine.

```console
$ docker run -d --name aerospike -p 3000:3000 -p 3001:3001 -p 3002:3002 -p 3003:3003 aerospike/aerospike-server
```

**NOTE** Although this is the simplest method of getting Aerospike up and running, it is not the preferred method. To properly run the container, please specify a **custom configuration** with the **access-address** defined.
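
For reference, the `access-address` setting lives in the `network` stanza of `aerospike.conf`. The following is a hypothetical fragment (the address shown is only an example placeholder, and a real configuration file also needs the other standard sections such as `service` and `namespace`):

```
network {
	service {
		address any
		port 3000
		access-address 192.168.1.100
	}
}
```
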
@ -30,7 +32,9 @@ This will use tell `asd` to use the file in `/opt/aerospike/etc/aerospike.conf`,
A full example:

```console
$ docker run -d -v <DIRECTORY>:/opt/aerospike/etc --name aerospike -p 3000:3000 -p 3001:3001 -p 3002:3002 -p 3003:3003 aerospike/aerospike-server asd --foreground --config-file /opt/aerospike/etc/aerospike.conf
```

## access-address Configuration
@ -56,7 +60,9 @@ Where `<DIRECTORY>` is the path to a directory containing your data files.
A full example:

```console
$ docker run -d -v <DIRECTORY>:/opt/aerospike/data --name aerospike -p 3000:3000 -p 3001:3001 -p 3002:3002 -p 3003:3003 aerospike/aerospike-server
```

## Clustering


@ -10,17 +10,21 @@
Use like you would any other base image:

```dockerfile
FROM alpine:3.1
RUN apk add --update mysql-client && rm -rf /var/cache/apk/*
ENTRYPOINT ["mysql"]
```

This example has a virtual image size of only 16 MB. Compare that to our good friend Ubuntu:

```dockerfile
FROM ubuntu:14.04
RUN apt-get update \
	&& apt-get install -y mysql-client \
	&& rm -rf /var/lib/apt/lists/*
ENTRYPOINT ["mysql"]
```

This yields a virtual image size of about 232 MB.


@ -31,19 +31,25 @@ ArangoDB Documentation
In order to start an ArangoDB instance, run:

```console
$ docker run -d --name arangodb-instance arangodb
```

This will create and launch the ArangoDB Docker instance as a background process. The identifier of the process is printed; the plain-text name will be *arangodb-instance*, as stated above. By default, ArangoDB listens on port 8529 for requests, and the image includes `EXPOSE 8529`. If you link an application container, it is automatically available in the linked container. See the following examples.

In order to get the IP address ArangoDB listens on, run:

```console
$ docker inspect --format '{{ .NetworkSettings.IPAddress }}' arangodb-instance
```

### Using the instance

In order to use the running instance from an application, link the container:

```console
$ docker run --name my-arangodb-app --link arangodb-instance:db-link arangodb
```

This will use the instance with the name `arangodb-instance` and link it into the application container. The application container will contain environment variables
@ -59,7 +65,9 @@ These can be used to access the database.
If you want to expose the port to the outside world, run:

```console
$ docker run -p 8529:8529 -d arangodb
```

ArangoDB listens on port 8529 for requests, and the image includes `EXPOSE 8529`. The `-p 8529:8529` option exposes this port on the host.
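
With the port mapped, you can check from the host that the server is reachable, for example by querying the version endpoint of ArangoDB's HTTP API (a sketch; this assumes default settings with authentication disabled):

```console
$ curl http://localhost:8529/_api/version
```
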
@ -73,10 +81,12 @@ A good explanation about persistence and docker container can be found here: [Do
You can map the container's volumes to a directory on the host, so that the data is kept between runs of the container. The path `/tmp/arangodb` is in general not the correct place to store your persistent files - it is just an example!

```console
$ mkdir /tmp/arangodb
$ docker run -p 8529:8529 -d \
	-v /tmp/arangodb:/var/lib/arangodb \
	arangodb
```

This will use the `/tmp/arangodb` directory of the host as the database directory for ArangoDB inside the container.
@ -86,24 +96,34 @@ The ArangoDB startup configuration is specified in the file `/etc/arangodb/arang
If `/my/custom/arangod.conf` is the path of your ArangoDB configuration file, you can start your `%%REPO%%` container like this:

```console
$ docker run --name some-%%REPO%% -v /my/custom:/etc/arangodb -d %%REPO%%:tag
```

This will start a new container `some-%%REPO%%` where the ArangoDB instance uses the startup settings from your config file instead of the default one.

Note that users on host systems with SELinux enabled may see issues with this. The current workaround is to assign the relevant SELinux policy type to your new config file so that the container will be allowed to mount it:

```console
$ chcon -Rt svirt_sandbox_file_t /my/custom
```

### Using a data container

Alternatively, you can create a container holding the data.

```console
$ docker run -d --name arangodb-persist -v /var/lib/arangodb debian:8.0 true
```

And use this data container in your ArangoDB container:

```console
$ docker run --volumes-from arangodb-persist -p 8529:8529 arangodb
```

If you want to save a few bytes, you can alternatively use [hello-world](https://registry.hub.docker.com/_/hello-world/), [busybox](https://registry.hub.docker.com/_/busybox/) or [alpine](https://registry.hub.docker.com/_/alpine/) for creating the volume-only containers. For example:

```console
$ docker run -d --name arangodb-persist -v /var/lib/arangodb alpine
```


@ -12,15 +12,19 @@ BusyBox combines tiny versions of many common UNIX utilities into a single small
## Run BusyBox shell

```console
$ docker run -it --rm busybox
```

This will drop you into an `sh` shell to allow you to do what you want inside a BusyBox system.

## Create a `Dockerfile` for a binary

```dockerfile
FROM busybox
COPY ./my-static-binary /my-static-binary
CMD ["/my-static-binary"]
```

This `Dockerfile` will allow you to create a minimal image for your statically compiled binary. You will have to compile the binary in some other place, like another container.
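
As a sketch of one possible workflow (assuming a C source file `hello.c`, the official `gcc` image for compilation, and `my-static-app` as a hypothetical tag):

```console
$ docker run --rm -v "$PWD":/usr/src -w /usr/src gcc gcc -static -o my-static-binary hello.c
$ docker build -t my-static-app .
$ docker run --rm my-static-app
```
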


@ -12,7 +12,9 @@ Apache Cassandra is an open source distributed database management system design
Starting a Cassandra instance is simple:

```console
$ docker run --name some-%%REPO%% -d %%REPO%%:tag
```

... where `some-%%REPO%%` is the name you want to assign to your container and `tag` is the tag specifying the Cassandra version you want. See the list above for relevant tags.
@ -20,13 +22,17 @@ Starting a Cassandra instance is simple:
This image exposes the standard Cassandra ports (see the [Cassandra FAQ](https://wiki.apache.org/cassandra/FAQ#ports)), so container linking makes the Cassandra instance available to other application containers. Start your application container like this in order to link it to the Cassandra container:

```console
$ docker run --name some-app --link some-%%REPO%%:%%REPO%% -d app-that-uses-cassandra
```

## Make a cluster

Using the environment variables documented below, there are two cluster scenarios: instances on the same machine and instances on separate machines. For the same machine, start the instance as described above. To start other instances, just tell each new node where the first is.

```console
$ docker run --name some-%%REPO%%2 -d -e CASSANDRA_SEEDS="$(docker inspect --format='{{ .NetworkSettings.IPAddress }}' some-%%REPO%%)" %%REPO%%:tag
```

... where `some-%%REPO%%` is the name of your original Cassandra Server container, taking advantage of `docker inspect` to get the IP address of the other container.
@ -34,21 +40,29 @@ For separate machines (ie, two VMs on a cloud provider), you need to tell Cassan
Assuming the first machine's IP address is `10.42.42.42` and the second's is `10.43.43.43`, start the first with exposed gossip port:

```console
$ docker run --name some-%%REPO%% -d -e CASSANDRA_BROADCAST_ADDRESS=10.42.42.42 -p 7000:7000 %%REPO%%:tag
```

Then start a Cassandra container on the second machine, with the exposed gossip port and seed pointing to the first machine:

```console
$ docker run --name some-%%REPO%% -d -e CASSANDRA_BROADCAST_ADDRESS=10.43.43.43 -p 7000:7000 -e CASSANDRA_SEEDS=10.42.42.42 %%REPO%%:tag
```

## Connect to Cassandra from `cqlsh`

The following command starts another Cassandra container instance and runs `cqlsh` (Cassandra Query Language Shell) against your original Cassandra container, allowing you to execute CQL statements against your database instance:

```console
$ docker run -it --link some-%%REPO%%:cassandra --rm %%REPO%% sh -c 'exec cqlsh "$CASSANDRA_PORT_9042_TCP_ADDR"'
```

... or (simplified to take advantage of the `/etc/hosts` entry Docker adds for linked containers):

```console
$ docker run -it --link some-%%REPO%%:cassandra --rm %%REPO%% cqlsh cassandra
```

... where `some-%%REPO%%` is the name of your original Cassandra Server container.
@ -58,11 +72,15 @@ More information about the CQL can be found in the [Cassandra documentation](htt
The `docker exec` command allows you to run commands inside a Docker container. The following command line will give you a bash shell inside your `%%REPO%%` container:

```console
$ docker exec -it some-%%REPO%% bash
```

The Cassandra Server log is available through Docker's container log:

```console
$ docker logs some-%%REPO%%
```

## Environment Variables
@ -116,13 +134,17 @@ The Docker documentation is a good starting point for understanding the differen
1.	Create a data directory on a suitable volume on your host system, e.g. `/my/own/datadir`.
2.	Start your `%%REPO%%` container like this:

```console
$ docker run --name some-%%REPO%% -v /my/own/datadir:/var/lib/cassandra/data -d %%REPO%%:tag
```

The `-v /my/own/datadir:/var/lib/cassandra/data` part of the command mounts the `/my/own/datadir` directory from the underlying host system as `/var/lib/cassandra/data` inside the container, where Cassandra by default will write its data files.

Note that users on host systems with SELinux enabled may see issues with this. The current workaround is to assign the relevant SELinux policy type to the new data directory so that the container will be allowed to access it:

```console
$ chcon -Rt svirt_sandbox_file_t /my/own/datadir
```

## No connections until Cassandra init completes
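
Since the server takes some time to initialize, a client started immediately may fail to connect. One simple workaround (a sketch, not part of the image itself) is to retry until `cqlsh` succeeds:

```console
$ until docker run -it --link some-%%REPO%%:cassandra --rm %%REPO%% cqlsh cassandra -e 'DESCRIBE CLUSTER'; do sleep 1; done
```
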


@ -8,16 +8,24 @@ Celery is an open source asynchronous task queue/job queue based on distributed
## start a celery worker (RabbitMQ Broker)

```console
$ docker run --link some-rabbit:rabbit --name some-celery -d celery
```

### check the status of the cluster

```console
$ docker run --link some-rabbit:rabbit --rm celery celery status
```

## start a celery worker (Redis Broker)

```console
$ docker run --link some-redis:redis -e CELERY_BROKER_URL=redis://redis --name some-celery -d celery
```

### check the status of the cluster

```console
$ docker run --link some-redis:redis -e CELERY_BROKER_URL=redis://redis --rm celery celery status
```


@ -30,44 +30,54 @@ Currently, systemd in CentOS 7 has been removed and replaced with a `fakesystemd
## Dockerfile for systemd base image

```dockerfile
FROM centos:7
MAINTAINER "you" <your@email.here>
ENV container docker
RUN yum -y swap -- remove fakesystemd -- install systemd systemd-libs
RUN yum -y update; yum clean all; \
(cd /lib/systemd/system/sysinit.target.wants/; for i in *; do [ $i == \
systemd-tmpfiles-setup.service ] || rm -f $i; done); \
rm -f /lib/systemd/system/multi-user.target.wants/*;\
rm -f /etc/systemd/system/*.wants/*;\
rm -f /lib/systemd/system/local-fs.target.wants/*; \
rm -f /lib/systemd/system/sockets.target.wants/*udev*; \
rm -f /lib/systemd/system/sockets.target.wants/*initctl*; \
rm -f /lib/systemd/system/basic.target.wants/*;\
rm -f /lib/systemd/system/anaconda.target.wants/*;
VOLUME [ "/sys/fs/cgroup" ]
CMD ["/usr/sbin/init"]
```

This Dockerfile swaps out fakesystemd for the real package, but deletes a number of unit files which might cause issues. From here, you are ready to build your base image.

```console
$ docker build --rm -t local/c7-systemd .
```

## Example systemd enabled app container

In order to use the systemd enabled base container created above, you will need to create your `Dockerfile` similar to the one below.

```dockerfile
FROM local/c7-systemd
RUN yum -y install httpd; yum clean all; systemctl enable httpd.service
EXPOSE 80
CMD ["/usr/sbin/init"]
```

Build this image:

```console
$ docker build --rm -t local/c7-systemd-httpd .
```

## Running a systemd enabled app container

In order to run a container with systemd, you will need to use the `--privileged` option mentioned earlier, as well as mounting the cgroups volumes from the host. Below is an example command that will run the systemd enabled httpd container created earlier.

```console
$ docker run --privileged -ti -v /sys/fs/cgroup:/sys/fs/cgroup:ro -p 80:80 local/c7-systemd-httpd
```

This container is running with systemd in a limited context, but it must always be run as a privileged container with the cgroups filesystem mounted.


@ -12,26 +12,32 @@ Clojure is a dialect of the Lisp programming language. It is a general-purpose p
Since the most common way to use Clojure is in conjunction with [Leiningen (`lein`)](http://leiningen.org/), this image assumes that's how you'll be working. The most straightforward way to use this image is to add a `Dockerfile` to an existing Leiningen/Clojure project:

```dockerfile
FROM clojure
COPY . /usr/src/app
WORKDIR /usr/src/app
CMD ["lein", "run"]
```

Then, run these commands to build and run the image:

```console
$ docker build -t my-clojure-app .
$ docker run -it --rm --name my-running-app my-clojure-app
```

While the above is the most straightforward example of a `Dockerfile`, it does have some drawbacks. The `lein run` command will download your dependencies, compile the project, and then run it. That's a lot of work, all of which you may not want done every time you run the image. To get around this, you can download the dependencies and compile the project ahead of time. This will significantly reduce startup time when you run your image.

```dockerfile
FROM clojure
RUN mkdir -p /usr/src/app
WORKDIR /usr/src/app
COPY project.clj /usr/src/app/
RUN lein deps
COPY . /usr/src/app
RUN mv "$(lein uberjar | sed -n 's/^Created \(.*standalone\.jar\)/\1/p')" app-standalone.jar
CMD ["java", "-jar", "app-standalone.jar"]
```

Writing the `Dockerfile` this way will download the dependencies (and cache them, so they are only re-downloaded when the dependencies change) and then compile them into a standalone jar ahead of time rather than each time the image is run.
@ -41,6 +47,8 @@ You can then build and run the image as above.
If you have an existing Lein/Clojure project, it's fairly straightforward to compile your project into a jar from a container:

```console
$ docker run -it --rm -v "$PWD":/usr/src/app -w /usr/src/app clojure lein uberjar
```

This will build your project into a jar file located in your project's `target/uberjar` directory.


@ -10,7 +10,9 @@ For support, please visit the [Couchbase support forum](https://forums.couchbase
# How to use this image: QuickStart

```console
$ docker run -d -p 8091:8091 couchbase
```

At this point, go to http://localhost:8091 from the host machine to see the Admin Console web UI. More details and screenshots are given below in the **Single host, single container** section.
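
You can also confirm from the command line that the server is up, for example by querying the REST API on the same port (a sketch; the exact response depends on whether the node has been initialized yet):

```console
$ curl http://localhost:8091/pools
```
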
@ -26,9 +28,11 @@ There are several deployment scenarios which this Docker image can easily suppor
A Couchbase Server Docker container will write all persistent and node-specific data under the directory `/opt/couchbase/var`. As this directory is declared to be a Docker volume, it will be persisted outside the normal union filesystem. This results in improved performance. It also allows you to easily migrate to a container running an updated point release of Couchbase Server without losing your data with a process like this:

```console
$ docker stop my-couchbase-container
$ docker run -d --name my-new-couchbase-container --volumes-from my-couchbase-container ....
$ docker rm my-couchbase-container
```

By default, the persisted location of the volume on your Docker host will be hidden away in a location managed by the Docker daemon. In order to control its location - in particular, to ensure that it is on a partition with sufficient disk space for your server - we recommend mapping the volume to a specific directory on the host filesystem using the `-v` option to `docker run`.
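A minimal sketch of that recommendation, using a hypothetical host directory `~/couchbase` mapped over the volume at startup:

```console
$ docker run -d -v ~/couchbase:/opt/couchbase/var couchbase
```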
@ -38,21 +42,27 @@ All of the example commands below will assume you are using volumes mapped to ho
If you have SELinux enabled, mounting host volumes in a container requires an extra step. Assuming you are mounting the `~/couchbase` directory on the host filesystem, you will need to run the following command once before running your first container on that host:

```console
$ mkdir ~/couchbase && chcon -Rt svirt_sandbox_file_t ~/couchbase
```
## Ulimits

Couchbase normally expects the following changes to ulimits:

```console
$ ulimit -n 40960        # nofile: max number of open files
$ ulimit -c unlimited    # core: max core file size
$ ulimit -l unlimited    # memlock: maximum locked-in-memory address space
```

These ulimit settings are necessary when running under heavy load, but if you are just doing light testing and development, you can omit these settings and everything will still work.

In order to set the ulimits in your container, you will need to run Couchbase Docker containers with the following additional `--ulimit` flags:

```console
$ docker run -d --ulimit nofile=40960:40960 --ulimit core=100000000:100000000 --ulimit memlock=100000000:100000000 couchbase
```

Since `unlimited` is not supported as a value, it sets the core and memlock values to 100 GB. If your system has more than 100 GB RAM, you will want to increase this value to match the available RAM on the system.
@ -78,16 +88,20 @@ This is a quick way to try out Couchbase Server on your own machine with no inst
**Start the container**

```console
$ docker run -d -v ~/couchbase:/opt/couchbase/var -p 8091:8091 --name my-couchbase-server couchbase
```

We use the `--name` option to make it easier to refer to this running container in the future.
**Verify container start**

Use the container name you specified (e.g. `my-couchbase-server`) to view the logs:

```console
$ docker logs my-couchbase-server
Starting Couchbase Server -- Web UI available at http://<ip>:8091
```

**Connect to the Admin Console**
@ -135,9 +149,11 @@ You should run the SDK on the host and point it to `http://localhost:8091/pools`
You can choose to mount `/opt/couchbase/var` from the host; however, you *must give each container a separate host directory*.

```console
$ docker run -d -v ~/couchbase/node1:/opt/couchbase/var couchbase
$ docker run -d -v ~/couchbase/node2:/opt/couchbase/var couchbase
$ docker run -d -v ~/couchbase/node3:/opt/couchbase/var -p 8091:8091 couchbase
```

**Setting up your Couchbase cluster**
@ -191,7 +207,9 @@ Using the `--net=host` flag will have the following effects:
Start a container on *each host* via:

```console
$ docker run -d -v ~/couchbase:/opt/couchbase/var --net=host couchbase
```

To configure Couchbase Server:


@ -8,19 +8,27 @@ Crate is an Elastic SQL Data Store. Distributed by design, Crate makes centraliz
## How to use this image

```console
$ docker run -d -p 4200:4200 -p 4300:4300 crate:latest
```

### Attach persistent data directory

```console
$ docker run -d -p 4200:4200 -p 4300:4300 -v <data-dir>:/data crate
```

### Use custom Crate configuration

```console
$ docker run -d -p 4200:4200 -p 4300:4300 crate -Des.config=/path/to/crate.yml
```

Any configuration settings may be specified upon startup using the `-D` option prefix. For example, configuring the cluster name by using system properties will work this way:

```console
$ docker run -d -p 4200:4200 -p 4300:4300 crate crate -Des.cluster.name=cluster
```

For further configuration options please refer to the [Configuration](https://crate.io/docs/stable/configuration.html) section of the online documentation.
@ -30,7 +38,9 @@ To set environment variables for Crate Data you need to use the `--env` option w
For example, setting the heap size:

```console
$ docker run -d -p 4200:4200 -p 4300:4300 --env CRATE_HEAP_SIZE=32g crate
```

## Multicast
@ -40,27 +50,33 @@ You can enable unicast in your custom `crate.yml`. See also: [Crate Multi Node S
Due to its architecture, Crate publishes the host it runs on for discovery within the cluster. Since the address of the host inside the docker container differs from the actual host the docker image is running on, you need to tell Crate to publish the address of the docker host for discovery.

```console
$ docker run -d -p 4200:4200 -p 4300:4300 crate crate -Des.network.publish_host=host1.example.com:
```

If you change the transport port from the default `4300` to something else, you also need to pass the publish port to Crate.

```console
$ docker run -d -p 4200:4200 -p 4321:4300 crate crate -Des.transport.publish_port=4321
```
### Example Usage in a Multinode Setup

```console
$ HOSTS='crate1.example.com:4300,crate2.example.com:4300,crate3.example.com:4300'
$ HOST=crate1.example.com
$ docker run -d \
    -p 4200:4200 \
    -p 4300:4300 \
    --name node1 \
    --volume /mnt/data:/data \
    --env CRATE_HEAP_SIZE=8g \
    crate:latest \
    crate -Des.cluster.name=cratecluster \
          -Des.node.name=crate1 \
          -Des.transport.publish_port=4300 \
          -Des.network.publish_host=$HOST \
          -Des.multicast.enabled=false \
          -Des.discovery.zen.ping.unicast.hosts=$HOSTS \
          -Des.discovery.zen.minimum_master_nodes=2
```

debian/content.md
@ -17,7 +17,9 @@ http://httpredir.debian.org/debian testing main`).
The mirror of choice for these images is [httpredir.debian.org](http://httpredir.debian.org) so that it's as close to optimal as possible, regardless of location or connection.

```console
$ docker run debian:jessie cat /etc/apt/sources.list
deb http://httpredir.debian.org/debian jessie main
deb http://httpredir.debian.org/debian jessie-updates main
deb http://security.debian.org jessie/updates main
```


@ -10,7 +10,9 @@ Django is a free and open source web application framework, written in Python, w
## Create a `Dockerfile` in your Django app project

```dockerfile
FROM django:onbuild
```

Put this file in the root of your app, next to the `requirements.txt`.
@ -18,23 +20,31 @@ This image includes multiple `ONBUILD` triggers which should cover most applicat
You can then build and run the Docker image:

```console
$ docker build -t my-django-app .
$ docker run --name some-django-app -d my-django-app
```

You can test it by visiting `http://container-ip:8000` in a browser or, if you need access outside the host, on `http://localhost:8000` with the following command:

```console
$ docker run --name some-django-app -p 8000:8000 -d my-django-app
```

## Without a `Dockerfile`

Of course, if you don't want to take advantage of magical and convenient `ONBUILD` triggers, you can always just use `docker run` directly to avoid having to add a `Dockerfile` to your project.

```console
$ docker run --name some-django-app -v "$PWD":/usr/src/app -w /usr/src/app -p 8000:8000 -d django bash -c "pip install -r requirements.txt && python manage.py runserver 0.0.0.0:8000"
```

## Bootstrap a new Django Application

If you want to generate the scaffolding for a new Django project, you can do the following:

```console
$ docker run -it --rm --user "$(id -u):$(id -g)" -v "$PWD":/usr/src/app -w /usr/src/app django django-admin.py startproject mysite
```

This will create a sub-directory named `mysite` inside your current directory.


@ -10,11 +10,15 @@ Drupal is a free and open-source content-management framework written in PHP and
The basic pattern for starting a `%%REPO%%` instance is:

```console
$ docker run --name some-%%REPO%% -d %%REPO%%
```

If you'd like to be able to access the instance from the host without the container's IP, standard port mappings can be used:

```console
$ docker run --name some-%%REPO%% -p 8080:80 -d %%REPO%%
```

Then, access it via `http://localhost:8080` or `http://host-ip:8080` in a browser.
@ -24,7 +28,9 @@ When first accessing the webserver provided by this image, it will go through a
## MySQL

```console
$ docker run --name some-%%REPO%% --link some-mysql:mysql -d %%REPO%%
```

- Database type: `MySQL, MariaDB, or equivalent`
- Database name/username/password: `<details for accessing your MySQL instance>` (`MYSQL_USER`, `MYSQL_PASSWORD`, `MYSQL_DATABASE`; see environment variables in the description for [`mysql`](https://registry.hub.docker.com/_/mysql/))
@ -32,7 +38,9 @@ When first accessing the webserver provided by this image, it will go through a
## PostgreSQL

```console
$ docker run --name some-%%REPO%% --link some-postgres:postgres -d %%REPO%%
```

- Database type: `PostgreSQL`
- Database name/username/password: `<details for accessing your PostgreSQL instance>` (`POSTGRES_USER`, `POSTGRES_PASSWORD`; see environment variables in the description for [`postgres`](https://registry.hub.docker.com/_/postgres/))


@ -12,18 +12,26 @@ Elasticsearch is a registered trademark of Elasticsearch BV.
You can run the default `elasticsearch` command simply:

```console
$ docker run -d elasticsearch
```

You can also pass in additional flags to `elasticsearch`:

```console
$ docker run -d elasticsearch elasticsearch -Des.node.name="TestNode"
```

This image comes with a default set of configuration files for `elasticsearch`, but if you want to provide your own set of configuration files, you can do so via a volume mounted at `/usr/share/elasticsearch/config`:

```console
$ docker run -d -v "$PWD/config":/usr/share/elasticsearch/config elasticsearch
```

This image is configured with a volume at `/usr/share/elasticsearch/data` to hold the persisted index data. Use that path if you would like to keep the data in a mounted volume:

```console
$ docker run -d -v "$PWD/esdata":/usr/share/elasticsearch/data elasticsearch
```

This image includes `EXPOSE 9200 9300` ([default `http.port`](http://www.elastic.co/guide/en/elasticsearch/reference/1.5/modules-http.html)), so standard container linking will make it automatically available to the linked containers.
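As a rough illustration of that linking behavior (the container name and link alias here are examples, and this assumes the server has finished starting up), a linked container can reach the HTTP port via the link alias:

```console
$ docker run -d --name some-elasticsearch elasticsearch
$ docker run --rm --link some-elasticsearch:elasticsearch busybox wget -qO- http://elasticsearch:9200
```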


@ -10,7 +10,9 @@ Fedora rawhide is available via `fedora:rawhide` and Fedora 20 via `fedora:20` a
The metalink `http://mirrors.fedoraproject.org` is used to automatically select a mirror site (both for building the image as well as for the yum repos in the container image).

```console
$ docker run fedora cat /etc/yum.repos.d/fedora.repo | grep metalink
metalink=https://mirrors.fedoraproject.org/metalink?repo=fedora-$releasever&arch=$basearch
metalink=https://mirrors.fedoraproject.org/metalink?repo=fedora-debug-$releasever&arch=$basearch
metalink=https://mirrors.fedoraproject.org/metalink?repo=fedora-source-$releasever&arch=$basearch
```


@ -10,14 +10,18 @@ Robot simulation is an essential tool in every roboticist's toolbox. A well-desi
## Create a `Dockerfile` in your Gazebo project

```dockerfile
FROM gazebo:gzserver5
# place here your application's setup specifics
CMD [ "gzserver", "my-gazebo-app-args" ]
```

You can then build and run the Docker image:

```console
$ docker build -t my-gazebo-app .
$ docker run -it -v="/tmp/.gazebo/:/root/.gazebo/" --name my-running-app my-gazebo-app
```
## Deployment use cases
@ -37,7 +41,9 @@ Gazebo uses the `~/.gazebo/` directory for storing logs, models and scene info.
For example, if one wishes to use their own `.gazebo` folder that already resides in their local home directory, with a username of `ubuntu`, we can simply launch the container with an additional volume argument:

```console
$ docker run -v "/home/ubuntu/.gazebo/:/root/.gazebo/" gazebo
```

One thing to be careful about is that gzserver logs to files named `/root/.gazebo/server-<port>/*.log`, where `<port>` is the port number the server is listening on (11345 by default). If you run and mount multiple containers using the same default port and the same host-side directory, they will collide and attempt to write to the same file. If you want to run multiple gzservers on the same Docker host, a bit more clever volume mounting of `~/.gazebo/` subfolders is required.
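One possible workaround, sketched here with hypothetical host paths, is to give each container its own host-side subfolder so their log directories never overlap:

```console
$ docker run -d -v "/tmp/.gazebo/server1/:/root/.gazebo/" gazebo
$ docker run -d -v "/tmp/.gazebo/server2/:/root/.gazebo/" gazebo
```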
@ -55,49 +61,67 @@ In this short example, we'll spin up a new container running gazebo server, conn
> First launch a gazebo server with a mounted volume for logging and name the container gazebo:

```console
$ docker run -d -v="/tmp/.gazebo/:/root/.gazebo/" --name=gazebo gazebo
```
> Now open a new bash session in the container using the same entrypoint to configure the environment. Then download the double_pendulum model and load it into the simulation.

```console
$ docker exec -it gazebo bash
$ curl -o double_pendulum.sdf http://models.gazebosim.org/double_pendulum_with_base/model-1_4.sdf
$ gz model --model-name double_pendulum --spawn-file double_pendulum.sdf
```
> To start recording the running simulation, simply use [`gz log`](http://www.gazebosim.org/tutorials?tut=log_filtering&cat=tools_utilities) to do so.

```console
$ gz log --record 1
```
> After a few seconds, go ahead and stop recording by disabling the same flag.

```console
$ gz log --record 0
```
> To introspect our logged recording, we can navigate to the log directory and use `gz log` to open and examine the motion and joint state of the pendulum. This will allow you to step through the poses of the pendulum links.

```console
$ cd ~/.gazebo/log/*/gzserver/
$ gz log --step --hz 10 --filter *.pose/*.pose --file state.log
```
> If you have an equivalent release of Gazebo installed locally, you can connect to the gzserver inside the container using the gzclient GUI by setting the address of the master URI to the container's public address.

```console
$ export GAZEBO_MASTER_IP=$(docker inspect --format '{{ .NetworkSettings.IPAddress }}' gazebo)
$ export GAZEBO_MASTER_URI=$GAZEBO_MASTER_IP:11345
$ gzclient --verbose
```
> In the rendered OpenGL view with gzclient you should see the moving double pendulum created earlier still oscillating. From here you can control or monitor the state of the simulation using the graphical interface, add more pendulums, reset the world, make more logs, etc. To quit the simulation, close the gzclient window and stop the container.

```console
$ docker stop gazebo
$ docker rm gazebo
```
> Even though our old gazebo container has been removed, we can still see that our recorded log has been preserved in the host volume directory.

```console
$ cd /tmp/.gazebo/log/
$ ls
```
> Again, if you have an equivalent release of Gazebo installed on your host system, you can play back the simulation with gazebo by using the recorded log file.

```console
$ export GAZEBO_MASTER_IP=127.0.0.1
$ export GAZEBO_MASTER_URI=$GAZEBO_MASTER_IP:11345
$ cd /tmp/.gazebo/log/*/gzserver/
$ gazebo --verbose --play state.log
```
# More Resources


@ -12,23 +12,31 @@ The GNU Compiler Collection (GCC) is a compiler system produced by the GNU Proje
The most straightforward way to use this image is to use a gcc container as both the build and runtime environment. In your `Dockerfile`, writing something along the lines of the following will compile and run your project:

```dockerfile
FROM gcc:4.9
COPY . /usr/src/myapp
WORKDIR /usr/src/myapp
RUN gcc -o myapp main.c
CMD ["./myapp"]
```

Then, build and run the Docker image:

```console
$ docker build -t my-gcc-app .
$ docker run -it --rm --name my-running-app my-gcc-app
```
## Compile your app inside the Docker container

There may be occasions where it is not appropriate to run your app inside a container. To compile, but not run, your app inside the Docker instance, you can write something like:

```console
$ docker run --rm -v "$PWD":/usr/src/myapp -w /usr/src/myapp gcc:4.9 gcc -o myapp myapp.c
```

This will add your current directory, as a volume, to the container, set the working directory to the volume, and run the command `gcc -o myapp myapp.c`. This tells gcc to compile the code in `myapp.c` and output the executable to `myapp`. Alternatively, if you have a `Makefile`, you can instead run the `make` command inside your container:

```console
$ docker run --rm -v "$PWD":/usr/src/myapp -w /usr/src/myapp gcc:4.9 make
```


@ -8,20 +8,28 @@ Ghost is a free and open source blogging platform written in JavaScript and dist
# How to use this image

```console
$ docker run --name some-ghost -d ghost
```

This will start a Ghost instance listening on the default Ghost port of 2368.

If you'd like to be able to access the instance from the host without the container's IP, standard port mappings can be used:

```console
$ docker run --name some-ghost -p 8080:2368 -d ghost
```

Then, access it via `http://localhost:8080` or `http://host-ip:8080` in a browser.

You can also point the image to your existing content on your host:

```console
$ docker run --name some-ghost -v /path/to/ghost/blog:/var/lib/ghost ghost
```

Alternatively you can use a [data container](http://docs.docker.com/userguide/dockervolumes/) that has a volume that points to `/var/lib/ghost` and then reference it:

```console
$ docker run --name some-ghost --volumes-from some-ghost-data ghost
```


@ -12,7 +12,9 @@ Go (a.k.a., Golang) is a programming language first developed at Google. It is a
The most straightforward way to use this image is to use a Go container as both the build and runtime environment. In your `Dockerfile`, writing something along the lines of the following will compile and run your project:

```dockerfile
FROM golang:1.3-onbuild
```

This image includes multiple `ONBUILD` triggers which should cover most applications. The build will `COPY . /usr/src/app`, `RUN go get -d -v`, and `RUN go install -v`.
@ -21,30 +23,40 @@ This image also includes the `CMD ["app"]` instruction which is the default comm
You can then build and run the Docker image: You can then build and run the Docker image:
docker build -t my-golang-app . ```console
docker run -it --rm --name my-running-app my-golang-app $ docker build -t my-golang-app .
$ docker run -it --rm --name my-running-app my-golang-app
```
## Compile your app inside the Docker container ## Compile your app inside the Docker container
There may be occasions where it is not appropriate to run your app inside a container. To compile, but not run your app inside the Docker instance, you can write something like: There may be occasions where it is not appropriate to run your app inside a container. To compile, but not run your app inside the Docker instance, you can write something like:
docker run --rm -v "$PWD":/usr/src/myapp -w /usr/src/myapp golang:1.3 go build -v ```console
$ docker run --rm -v "$PWD":/usr/src/myapp -w /usr/src/myapp golang:1.3 go build -v
```
This will add your current directory as a volume to the container, set the working directory to the volume, and run the command `go build` which will tell go to compile the project in the working directory and output the executable to `myapp`. Alternatively, if you have a `Makefile`, you can run the `make` command inside your container. This will add your current directory as a volume to the container, set the working directory to the volume, and run the command `go build` which will tell go to compile the project in the working directory and output the executable to `myapp`. Alternatively, if you have a `Makefile`, you can run the `make` command inside your container.
docker run --rm -v "$PWD":/usr/src/myapp -w /usr/src/myapp golang:1.3 make ```console
$ docker run --rm -v "$PWD":/usr/src/myapp -w /usr/src/myapp golang:1.3 make
```
## Cross-compile your app inside the Docker container ## Cross-compile your app inside the Docker container
If you need to compile your application for a platform other than `linux/amd64` (such as `windows/386`), this can be easily accomplished with the provided `cross` tags: If you need to compile your application for a platform other than `linux/amd64` (such as `windows/386`), this can be easily accomplished with the provided `cross` tags:
docker run --rm -v "$PWD":/usr/src/myapp -w /usr/src/myapp -e GOOS=windows -e GOARCH=386 golang:1.3-cross go build -v ```console
$ docker run --rm -v "$PWD":/usr/src/myapp -w /usr/src/myapp -e GOOS=windows -e GOARCH=386 golang:1.3-cross go build -v
```
Alternatively, you can build for multiple platforms at once: Alternatively, you can build for multiple platforms at once:
docker run --rm -it -v "$PWD":/usr/src/myapp -w /usr/src/myapp golang:1.3-cross bash ```console
$ for GOOS in darwin linux; do $ docker run --rm -it -v "$PWD":/usr/src/myapp -w /usr/src/myapp golang:1.3-cross bash
> for GOARCH in 386 amd64; do $ for GOOS in darwin linux; do
> go build -v -o myapp-$GOOS-$GOARCH > for GOARCH in 386 amd64; do
> done > go build -v -o myapp-$GOOS-$GOARCH
> done > done
> done
```
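The nested loops above emit one binary per `GOOS`/`GOARCH` pair. As a quick sketch of the resulting naming scheme (plain shell, no Go toolchain required):

```shell
# Prints the output file names the loop above would generate,
# one per platform pair.
for GOOS in darwin linux; do
  for GOARCH in 386 amd64; do
    echo "myapp-$GOOS-$GOARCH"
  done
done
# → myapp-darwin-386, myapp-darwin-amd64, myapp-linux-386, myapp-linux-amd64
```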

View File

@@ -18,14 +18,20 @@ Note: Many configuration examples propose to put `daemon` into the `global` sect

## Create a `Dockerfile`

```dockerfile
FROM haproxy:1.5
COPY haproxy.cfg /usr/local/etc/haproxy/haproxy.cfg
```

Build and run:

```console
$ docker build -t my-haproxy .
$ docker run -d --name my-running-haproxy my-haproxy
```

## Directly via bind mount

```console
$ docker run -d --name my-running-haproxy -v /path/to/haproxy.cfg:/usr/local/etc/haproxy/haproxy.cfg:ro haproxy:1.5
```

View File

@@ -23,40 +23,46 @@ The most recent GHC release in the 7.8 series is also available, though no longe

Start an interactive interpreter session with `ghci`:

```console
$ docker run -it --rm haskell:7.10
GHCi, version 7.10.1: http://www.haskell.org/ghc/ :? for help
Prelude>
```

Dockerize a [Hackage](http://hackage.haskell.org) app with a Dockerfile inheriting from the base image:

```dockerfile
FROM haskell:7.8
RUN cabal update && cabal install MazesOfMonad
VOLUME /root/.MazesOfMonad
ENTRYPOINT ["/root/.cabal/bin/mazesofmonad"]
```

Iteratively develop then ship a Haskell app with a Dockerfile utilizing the build cache:

```dockerfile
FROM haskell:7.8

RUN cabal update

# Add .cabal file
ADD ./server/snap-example.cabal /opt/server/snap-example.cabal

# Docker will cache this command as a layer, freeing us up to
# modify source code without re-installing dependencies
RUN cd /opt/server && cabal install --only-dependencies -j4

# Add and Install Application Code
ADD ./server /opt/server
RUN cd /opt/server && cabal install

# Add installed cabal executables to PATH
ENV PATH /root/.cabal/bin:$PATH

# Default Command for Container
WORKDIR /opt/server
CMD ["snap-example"]
```

### Examples

View File

@@ -1,31 +1,33 @@

# Example output

```console
$ docker run hello-world

Hello from Docker.
This message shows that your installation appears to be working correctly.

To generate this message, Docker took the following steps:
 1. The Docker client contacted the Docker daemon.
 2. The Docker daemon pulled the "hello-world" image from the Docker Hub.
 3. The Docker daemon created a new container from that image which runs the
    executable that produces the output you are currently reading.
 4. The Docker daemon streamed that output to the Docker client, which sent it
    to your terminal.

To try something more ambitious, you can run an Ubuntu container with:
 $ docker run -it ubuntu bash

Share images, automate workflows, and more with a free Docker Hub account:
 https://hub.docker.com

For more examples and ideas, visit:
 https://docs.docker.com/userguide/

$ docker images hello-world
REPOSITORY    TAG     IMAGE ID      VIRTUAL SIZE
hello-world   latest  af340544ed62  960 B
```

%%LOGO%%

View File

@@ -8,13 +8,13 @@ exec > "$(dirname "$(readlink -f "$BASH_SOURCE")")/content.md"

echo '# Example output'
echo
echo '```console'
echo '$ docker run hello-world'
docker run --rm hello-world
echo
echo '$ docker images hello-world'
docker images hello-world | awk -F'  +' 'NR == 1 || $2 == "latest" { print $1"\t"$2"\t"$3"\t"$5 }' | column -t -s$'\t'
echo '```'
echo
echo '%%LOGO%%'
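The `awk` filter in the script keeps the header row (`NR == 1`) plus any row whose second column is `latest`, splitting fields on runs of two or more spaces so multi-word columns like `IMAGE ID` stay intact. A sketch on fabricated sample rows (the image IDs below are made up for illustration):

```shell
# Feed fake `docker images` output through the same filter; only the header
# and the "latest" row survive.
printf '%s\n' \
  'REPOSITORY   TAG      IMAGE ID      CREATED       VIRTUAL SIZE' \
  'hello-world  latest   af340544ed62  4 weeks ago   960 B' \
  'hello-world  stale    0123456789ab  9 weeks ago   960 B' |
  awk -F'  +' 'NR == 1 || $2 == "latest" { print $1"\t"$2"\t"$3"\t"$5 }'
# → two tab-separated lines: the header and the "latest" row
```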

View File

@@ -12,26 +12,34 @@ This image only contains Apache httpd with the defaults from upstream. There is

### Create a `Dockerfile` in your project

```dockerfile
FROM httpd:2.4
COPY ./public-html/ /usr/local/apache2/htdocs/
```

Then, run the commands to build and run the Docker image:

```console
$ docker build -t my-apache2 .
$ docker run -it --rm --name my-running-app my-apache2
```

### Without a `Dockerfile`

If you don't want to include a `Dockerfile` in your project, it is sufficient to do the following:

```console
$ docker run -it --rm --name my-apache-app -v "$PWD":/usr/local/apache2/htdocs/ httpd:2.4
```

### Configuration

To customize the configuration of the httpd server, just `COPY` your custom configuration in as `/usr/local/apache2/conf/httpd.conf`.

```dockerfile
FROM httpd:2.4
COPY ./my-httpd.conf /usr/local/apache2/conf/httpd.conf
```

#### SSL/HTTPS

View File

@@ -10,18 +10,24 @@ Hy (a.k.a., Hylang) is a dialect of the Lisp programming language designed to in

## Create a `Dockerfile` in your Hy project

```dockerfile
FROM hylang:0.10
COPY . /usr/src/myapp
WORKDIR /usr/src/myapp
CMD [ "hy", "./your-daemon-or-script.hy" ]
```

You can then build and run the Docker image:

```console
$ docker build -t my-hylang-app .
$ docker run -it --rm --name my-running-app my-hylang-app
```

## Run a single Hy script

For many simple, single file projects, you may find it inconvenient to write a complete `Dockerfile`. In such cases, you can run a Hy script by using the Hy Docker image directly:

```console
$ docker run -it --rm --name my-running-script -v "$PWD":/usr/src/myapp -w /usr/src/myapp hylang:0.10 hy your-daemon-or-script.hy
```

View File

@@ -14,17 +14,23 @@ This project aims to continue development of io.js under an "open governance mod

If you want to distribute your application on the Docker registry, create a `Dockerfile` in the root of your application directory:

```dockerfile
FROM iojs:onbuild

# Expose the ports that your app uses. For example:
EXPOSE 8080
```

Then simply run:

```console
$ docker build -t iojs-app .
...
$ docker run --rm -it iojs-app
```

To run a single script, you can mount it in a volume under `/usr/src/app`. From the root of your application directory (assuming your script is named `index.js`):

```console
$ docker run -v "$PWD":/usr/src/app -w /usr/src/app -it --rm iojs iojs index.js
```

View File

@@ -16,22 +16,28 @@ Be sure to also checkout the [awesome scripts](https://github.com/irssi/scripts.

On a Linux system, build and launch a container named `my-running-irssi` like this:

```console
$ docker run -it --name my-running-irssi -e TERM -u $(id -u):$(id -g) \
    -v $HOME/.irssi:/home/user/.irssi:ro \
    -v /etc/localtime:/etc/localtime:ro \
    irssi
```

On a Mac OS X system, run the same image using:

```console
$ docker run -it --name my-running-irssi -e TERM -u $(id -u):$(id -g) \
    -v $HOME/.irssi:/home/user/.irssi:ro \
    irssi
```

You omit `/etc/localtime` on Mac OS X because `boot2docker` doesn't use this file.

Of course, you can name your image anything you like. In Docker 1.5 you can also use the `--read-only` mount flag. For example, on Linux:

```console
$ docker run -it --name my-running-irssi -e TERM -u $(id -u):$(id -g) \
    --read-only -v $HOME/.irssi:/home/user/.irssi \
    -v /etc/localtime:/etc/localtime \
    irssi
```
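The `-u $(id -u):$(id -g)` flag in the commands above passes your numeric host UID and GID into the container, so files written to the bind-mounted `~/.irssi` keep your ownership. A quick sketch of what the shell substitutes before `docker` ever runs:

```shell
# The shell expands $(id -u) and $(id -g) first; the container user then
# becomes your numeric host UID:GID (e.g. 1000:1000 on many Linux systems).
echo "container will run as: $(id -u):$(id -g)"
```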

View File

@@ -14,22 +14,28 @@ Java is a registered trademark of Oracle and/or its affiliates.

The most straightforward way to use this image is to use a Java container as both the build and runtime environment. In your `Dockerfile`, writing something along the lines of the following will compile and run your project:

```dockerfile
FROM java:7
COPY . /usr/src/myapp
WORKDIR /usr/src/myapp
RUN javac Main.java
CMD ["java", "Main"]
```

You can then build and run the Docker image:

```console
$ docker build -t my-java-app .
$ docker run -it --rm --name my-running-app my-java-app
```

## Compile your app inside the Docker container

There may be occasions where it is not appropriate to run your app inside a container. To compile, but not run your app inside the Docker instance, you can write something like:

```console
$ docker run --rm -v "$PWD":/usr/src/myapp -w /usr/src/myapp java:7 javac Main.java
```

This will add your current directory as a volume to the container, set the working directory to the volume, and run the command `javac Main.java` which will tell Java to compile the code in `Main.java` and output the Java class file to `Main.class`.

View File

@@ -8,11 +8,15 @@ This is a fully functional Jenkins server, based on the Long Term Support releas

# How to use this image

```console
$ docker run -p 8080:8080 jenkins
```

This will store the workspace in `/var/jenkins_home`. All Jenkins data lives in there, including plugins and configuration. You will probably want to make that a persistent volume:

```console
$ docker run --name myjenkins -p 8080:8080 -v /var/jenkins_home jenkins
```

The volume for the "myjenkins" named container will then be persistent.

@@ -20,7 +24,9 @@ You can also bind mount in a volume from the host:

First, ensure that `/your/home` is accessible by the jenkins user in the container (the jenkins user normally has uid 102; or use `-u root`), then:

```console
$ docker run -p 8080:8080 -v /your/home:/var/jenkins_home jenkins
```

## Backing up data

View File

@@ -10,11 +10,15 @@ Jetty is a pure Java-based HTTP (Web) server and Java Servlet container. While W

Run the default Jetty server (`CMD ["jetty.sh", "run"]`):

```console
$ docker run -d %%REPO%%:9
```

You can test it by visiting `http://container-ip:8080` in a browser or, if you need access outside the host, on port 8888:

```console
$ docker run -d -p 8888:8080 %%REPO%%:9
```

You can then go to `http://localhost:8888` or `http://host-ip:8888` in a browser.

@@ -38,7 +42,9 @@ For older EOL'd images based on Jetty 7 or Jetty 8, please follow the [legacy in

To run `%%REPO%%` as a read-only container, have Docker create the `/tmp/jetty` and `/run/jetty` directories as volumes:

```console
$ docker run -d --read-only -v /tmp/jetty -v /run/jetty %%REPO%%:9
```

Since the container is read-only, you'll need to either mount in your webapps directory with `-v /path/to/my/webapps:/var/lib/jetty/webapps` or populate `/var/lib/jetty/webapps` in a derived image.

@@ -48,4 +54,6 @@ By default, this image starts as user `root` and uses Jetty's `setuid` module to

If you would like the image to start immediately as user `jetty` instead of starting as `root`, you can start the container with `-u jetty`:

```console
$ docker run -d -u jetty %%REPO%%:9
```

View File

@@ -8,7 +8,9 @@ Joomla is a free and open-source content management system (CMS) for publishing

# How to use this image

```console
$ docker run --name some-%%REPO%% --link some-mysql:mysql -d %%REPO%%
```

The following environment variables are also honored for configuring your Joomla instance:

@@ -21,14 +23,18 @@ If the `JOOMLA_DB_NAME` specified does not already exist on the given MySQL serv

If you'd like to be able to access the instance from the host without the container's IP, standard port mappings can be used:

```console
$ docker run --name some-%%REPO%% --link some-mysql:mysql -p 8080:80 -d %%REPO%%
```

Then, access it via `http://localhost:8080` or `http://host-ip:8080` in a browser.

If you'd like to use an external database instead of a linked `mysql` container, specify the hostname and port with `JOOMLA_DB_HOST` along with the password in `JOOMLA_DB_PASSWORD` and the username in `JOOMLA_DB_USER` (if it is something other than `root`):

```console
$ docker run --name some-%%REPO%% -e JOOMLA_DB_HOST=10.1.2.3:3306 \
    -e JOOMLA_DB_USER=... -e JOOMLA_DB_PASSWORD=... -d %%REPO%%
```

## %%COMPOSE%%

View File

@@ -14,8 +14,10 @@ JRuby leverages the robustness and speed of the JVM while providing the same Rub

## Create a `Dockerfile` in your Ruby app project

```dockerfile
FROM jruby:1.7-onbuild
CMD ["./your-daemon-or-script.rb"]
```

Put this file in the root of your app, next to the `Gemfile`.

@@ -23,17 +25,23 @@ This image includes multiple `ONBUILD` triggers which should be all you need to

You can then build and run the Ruby image:

```console
$ docker build -t my-ruby-app .
$ docker run -it --name my-running-script my-ruby-app
```

### Generate a `Gemfile.lock`

The `onbuild` tag expects a `Gemfile.lock` in your app directory. This `docker run` will help you generate one. Run it in the root of your app, next to the `Gemfile`:

```console
$ docker run --rm -v "$PWD":/usr/src/app -w /usr/src/app jruby:1.7 bundle install --system
```

## Run a single Ruby script

For many simple, single file projects, you may find it inconvenient to write a complete `Dockerfile`. In such cases, you can run a Ruby script by using the Ruby Docker image directly:

```console
$ docker run -it --rm --name my-running-script -v "$PWD":/usr/src/myapp -w /usr/src/myapp jruby:1.7 jruby your-daemon-or-script.rb
```

View File

@@ -12,8 +12,12 @@ Julia is a high-level, high-performance dynamic programming language for technic

Starting the Julia REPL is as easy as the following:

```console
$ docker run -it --rm julia
```

## Run a Julia script from your local directory inside the container

```console
$ docker run -it --rm -v "$PWD":/usr/myapp -w /usr/myapp julia julia script.jl arg1 arg2
```

View File

@@ -10,7 +10,9 @@ The Kaazing Gateway is a network gateway created to provide a single access poin

By default the gateway runs a WebSocket echo service similar to [websocket.org](https://www.websocket.org/echo.html).

```console
$ docker run --name some-kaazing-gateway -h somehostname -d -p 8000:8000 kaazing-gateway
```

You should then be able to connect to `ws://somehostname:8000` from the [WebSocket echo test](https://www.websocket.org/echo.html).

@@ -20,19 +22,27 @@ Note: this assumes that `somehostname` is resolvable from your browser, you may

To launch a container with a specific configuration you can do the following:

```console
$ docker run --name some-kaazing-gateway -v /some/gateway-config.xml:/kaazing-gateway/conf/gateway-config.xml:ro -d kaazing-gateway
```

For information on the syntax of the Kaazing Gateway configuration files, see [the official documentation](http://developer.kaazing.com/documentation/5.0/index.html) (specifically the [Configuration Guide](http://developer.kaazing.com/documentation/5.0/admin-reference/r_conf_elementindex.html)).

If you wish to adapt the default Gateway configuration file, you can use a command such as the following to copy the file from a running Kaazing Gateway container:

```console
$ docker cp some-kaazing:/conf/gateway-config-minimal.xml /some/gateway-config.xml
```

As above, this can also be accomplished more cleanly using a simple `Dockerfile`:

```dockerfile
FROM kaazing-gateway
COPY gateway-config.xml /conf/gateway-config.xml
```

Then, build with `docker build -t some-custom-kaazing-gateway .` and run:

```console
$ docker run --name some-kaazing-gateway -d some-custom-kaazing-gateway
```

View File

@@ -12,18 +12,26 @@ Kibana is a registered trademark of Elasticsearch BV.

You can run the default `%%REPO%%` command simply:

```console
$ docker run --link some-elasticsearch:elasticsearch -d %%REPO%%
```

You can also pass in additional flags to `%%REPO%%`:

```console
$ docker run --link some-elasticsearch:elasticsearch -d %%REPO%% --plugins /somewhere/else
```

This image includes `EXPOSE 5601` ([default `port`](https://www.elastic.co/guide/en/kibana/current/_setting_kibana_server_properties.html)). If you'd like to be able to access the instance from the host without the container's IP, standard port mappings can be used:

```console
$ docker run --name some-%%REPO%% --link some-elasticsearch:elasticsearch -p 5601:5601 -d %%REPO%%
```

You can also provide the address of Elasticsearch via the `ELASTICSEARCH_URL` environment variable:

```console
$ docker run --name some-kibana -e ELASTICSEARCH_URL=http://some-elasticsearch:9200 -p 5601:5601 -d kibana
```

Then, access it via `http://localhost:5601` or `http://host-ip:5601` in a browser.

View File

@@ -12,10 +12,14 @@ Logstash is a tool that can be used to collect, process and forward events and l

If you need to run logstash with configuration provided on the command line, you can use the logstash image as follows:

```console
$ docker run -it --rm logstash logstash -e 'input { stdin { } } output { stdout { } }'
```

## Start Logstash with a configuration file

If you need to run logstash with a configuration file, `logstash.conf`, that's located in your current directory, you can use the logstash image as follows:

```console
$ docker run -it --rm -v "$PWD":/config-dir logstash logstash -f /config-dir/logstash.conf
```
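To avoid the bind mount entirely, the configuration file could also be baked into a derived image (an illustrative sketch, not part of this commit; the `/config-dir` path simply mirrors the command above):

```dockerfile
# Sketch: copy logstash.conf into a derived image instead of bind mounting it
FROM logstash
COPY logstash.conf /config-dir/logstash.conf
CMD ["logstash", "-f", "/config-dir/logstash.conf"]
```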

View File

@@ -18,9 +18,11 @@ To date, Mageia:

## Create a Dockerfile for your container

```dockerfile
FROM mageia:4
MAINTAINER "Foo Bar" <foo@bar.com>
CMD [ "bash" ]
```

## Installed packages


@ -12,7 +12,9 @@ The intent is also to maintain high compatibility with MySQL, ensuring a "drop-i
## start a `%%REPO%%` server instance

```console
$ docker run --name some-%%REPO%% -e MYSQL_ROOT_PASSWORD=mysecretpassword -d %%REPO%%
```

This image includes `EXPOSE 3306` (the standard MySQL port), so container linking will make it automatically available to the linked containers (as the following examples illustrate).
@ -20,11 +22,15 @@ This image includes `EXPOSE 3306` (the standard MySQL port), so container linkin
Since MariaDB is intended as a drop-in replacement for MySQL, it can be used with many applications.

```console
$ docker run --name some-app --link some-%%REPO%%:mysql -d application-that-uses-mysql
```

## ... or via `mysql`

```console
$ docker run -it --link some-%%REPO%%:mysql --rm %%REPO%% sh -c 'exec mysql -h"$MYSQL_PORT_3306_TCP_ADDR" -P"$MYSQL_PORT_3306_TCP_PORT" -uroot -p"$MYSQL_ENV_MYSQL_ROOT_PASSWORD"'
```

## Environment Variables


@ -8,8 +8,10 @@
## Create a Dockerfile in your Maven project

```dockerfile
FROM maven:3.2-jdk-7-onbuild
CMD ["do-something-with-built-packages"]
```

Put this file in the root of your project, next to the `pom.xml`.
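If you're starting from scratch, a minimal `pom.xml` for a plain jar project might look like the following (a sketch; the `groupId`/`artifactId` values are placeholders):

```xml
<project xmlns="http://maven.apache.org/POM/4.0.0">
  <modelVersion>4.0.0</modelVersion>
  <groupId>com.example</groupId>
  <artifactId>my-app</artifactId>
  <version>1.0-SNAPSHOT</version>
  <packaging>jar</packaging>
</project>
```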
@ -17,11 +19,15 @@ This image includes multiple ONBUILD triggers which should be all you need to bo
You can then build and run the image:

```console
$ docker build -t my-maven .
$ docker run -it --name my-maven-script my-maven
```

## Run a single Maven command

For many simple projects, you may find it inconvenient to write a complete `Dockerfile`. In such cases, you can run a Maven project by using the Maven Docker image directly, passing a Maven command to `docker run`:

```console
$ docker run -it --rm --name my-maven-project -v "$PWD":/usr/src/mymaven -w /usr/src/mymaven maven:3.2-jdk-7 mvn clean install
```


@ -8,17 +8,23 @@ Memcached's APIs provide a very large hash table distributed across multiple mac
# How to use this image

```console
$ docker run --name my-memcache -d memcached
```

Start your memcached container with the above command and then you can connect your app to it with standard linking:

```console
$ docker run --link my-memcache:memcache -d my-app-image
```

The memcached server information would then be available through the ENV variables generated by the link as well as through DNS as `memcache` from `/etc/hosts`.
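As an illustration (a hypothetical Python client, not part of this image), an app could resolve the server address from the link-generated variables, falling back to the DNS alias:

```python
import os

# When linked with `--link my-memcache:memcache`, Docker injects variables such as
# MEMCACHE_PORT_11211_TCP_ADDR / MEMCACHE_PORT_11211_TCP_PORT into the app container.
# Fall back to the "memcache" alias from /etc/hosts when the variables are absent.
host = os.environ.get("MEMCACHE_PORT_11211_TCP_ADDR", "memcache")
port = int(os.environ.get("MEMCACHE_PORT_11211_TCP_PORT", "11211"))
server = (host, port)  # pass this to your memcached client library
```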
How to set the memory usage for memcached

```console
$ docker run --name my-memcache -d memcached memcached -m 64
```

This would set the memcache server to use 64 megabytes for storage.


@ -12,17 +12,23 @@ First developed by the software company 10gen (now MongoDB Inc.) in October 2007
## start a mongo instance

```console
$ docker run --name some-mongo -d mongo
```

This image includes `EXPOSE 27017` (the mongo port), so standard container linking will make it automatically available to the linked containers (as the following examples illustrate).
## connect to it from an application

```console
$ docker run --name some-app --link some-mongo:mongo -d application-that-uses-mongo
```

## ... or via `mongo`

```console
$ docker run -it --link some-mongo:mongo --rm mongo sh -c 'exec mongo "$MONGO_PORT_27017_TCP_ADDR:$MONGO_PORT_27017_TCP_PORT/test"'
```

## Configuration
@ -30,7 +36,9 @@ See the [official docs](http://docs.mongodb.org/manual/) for information on using
Just add the `--storageEngine` argument if you want to use the WiredTiger storage engine in MongoDB 3.0 and above without making a config file. Be sure to check the [docs](http://docs.mongodb.org/manual/release-notes/3.0-upgrade/#change-storage-engine-to-wiredtiger) on how to upgrade from older versions.

```console
$ docker run --name some-mongo -d mongo --storageEngine=wiredTiger
```

## Where to Store Data
@ -46,10 +54,14 @@ The Docker documentation is a good starting point for understanding the differen
1. Create a data directory on a suitable volume on your host system, e.g. `/my/own/datadir`.
2. Start your `%%REPO%%` container like this:

```console
$ docker run --name some-%%REPO%% -v /my/own/datadir:/data/db -d %%REPO%%:tag
```

The `-v /my/own/datadir:/data/db` part of the command mounts the `/my/own/datadir` directory from the underlying host system as `/data/db` inside the container, where MongoDB by default will write its data files.

Note that users on host systems with SELinux enabled may see issues with this. The current workaround is to assign the relevant SELinux policy type to the new data directory so that the container will be allowed to access it:

```console
$ chcon -Rt svirt_sandbox_file_t /my/own/datadir
```


@ -15,8 +15,10 @@ This image will run stand-alone Mono console apps.
This example Dockerfile will run an executable called `TestingConsoleApp.exe`.

```dockerfile
FROM mono:3.10-onbuild
CMD [ "mono", "./TestingConsoleApp.exe" ]
```

Place this file in the root of your app, next to the `.sln` solution file. Modify the executable name to match what you want to run.
@ -24,8 +26,10 @@ This image includes `ONBUILD` triggers that add your app source code to `/usr/s
With the Dockerfile in place, you can build and run a Docker image with your app:

```console
$ docker build -t my-app .
$ docker run my-app
```

You should see any output from your app now.


@ -12,7 +12,9 @@ For more information and related downloads for MySQL Server and other MySQL prod
Starting a MySQL instance is simple:

```console
$ docker run --name some-%%REPO%% -e MYSQL_ROOT_PASSWORD=my-secret-pw -d %%REPO%%:tag
```

... where `some-%%REPO%%` is the name you want to assign to your container, `my-secret-pw` is the password to be set for the MySQL root user and `tag` is the tag specifying the MySQL version you want. See the list above for relevant tags.
@ -20,13 +22,17 @@ Starting a MySQL instance is simple:
This image exposes the standard MySQL port (3306), so container linking makes the MySQL instance available to other application containers. Start your application container like this in order to link it to the MySQL container:

```console
$ docker run --name some-app --link some-%%REPO%%:mysql -d app-that-uses-mysql
```

## Connect to MySQL from the MySQL command line client

The following command starts another MySQL container instance and runs the `mysql` command line client against your original MySQL container, allowing you to execute SQL statements against your database instance:

```console
$ docker run -it --link some-%%REPO%%:mysql --rm %%REPO%% sh -c 'exec mysql -h"$MYSQL_PORT_3306_TCP_ADDR" -P"$MYSQL_PORT_3306_TCP_PORT" -uroot -p"$MYSQL_ENV_MYSQL_ROOT_PASSWORD"'
```

... where `some-%%REPO%%` is the name of your original MySQL Server container.
@ -36,11 +42,15 @@ More information about the MySQL command line client can be found in the [MySQL
The `docker exec` command allows you to run commands inside a Docker container. The following command line will give you a bash shell inside your `%%REPO%%` container:

```console
$ docker exec -it some-%%REPO%% bash
```

The MySQL Server log is available through Docker's container log:

```console
$ docker logs some-%%REPO%%
```

## Using a custom MySQL configuration file
@ -48,13 +58,17 @@ The MySQL startup configuration is specified in the file `/etc/mysql/my.cnf`, an
If `/my/custom/config-file.cnf` is the path and name of your custom configuration file, you can start your `%%REPO%%` container like this (note that only the directory path of the custom config file is used in this command):

```console
$ docker run --name some-%%REPO%% -v /my/custom:/etc/mysql/conf.d -e MYSQL_ROOT_PASSWORD=my-secret-pw -d %%REPO%%:tag
```

This will start a new container `some-%%REPO%%` where the MySQL instance uses the combined startup settings from `/etc/mysql/my.cnf` and `/etc/mysql/conf.d/config-file.cnf`, with settings from the latter taking precedence.
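As an illustration (not an official sample), a custom `config-file.cnf` only needs the settings you want to override, e.g.:

```ini
[mysqld]
# illustrative override; adjust to your workload
max_connections = 250
```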
Note that users on host systems with SELinux enabled may see issues with this. The current workaround is to assign the relevant SELinux policy type to your new config file so that the container will be allowed to mount it:

```console
$ chcon -Rt svirt_sandbox_file_t /my/custom
```

## Environment Variables
@ -92,13 +106,17 @@ The Docker documentation is a good starting point for understanding the differen
1. Create a data directory on a suitable volume on your host system, e.g. `/my/own/datadir`.
2. Start your `%%REPO%%` container like this:

```console
$ docker run --name some-%%REPO%% -v /my/own/datadir:/var/lib/mysql -e MYSQL_ROOT_PASSWORD=my-secret-pw -d %%REPO%%:tag
```

The `-v /my/own/datadir:/var/lib/mysql` part of the command mounts the `/my/own/datadir` directory from the underlying host system as `/var/lib/mysql` inside the container, where MySQL by default will write its data files.

Note that users on host systems with SELinux enabled may see issues with this. The current workaround is to assign the relevant SELinux policy type to the new data directory so that the container will be allowed to access it:

```console
$ chcon -Rt svirt_sandbox_file_t /my/own/datadir
```

## No connections until MySQL init completes


@ -18,7 +18,9 @@ The `neurodebian:latest` tag will always point to the Neurodebian-enabled latest st
The NeuroDebian APT file is installed under `/etc/apt/sources.list.d/neurodebian.sources.list` and currently enables only the `main` (DFSG-compliant) area of the archive:

```console
$ docker run neurodebian:latest cat /etc/apt/sources.list.d/neurodebian.sources.list
deb http://neuro.debian.net/debian wheezy main
deb http://neuro.debian.net/debian data main
#deb-src http://neuro.debian.net/debian-devel wheezy main
```


@ -10,26 +10,36 @@ Nginx (pronounced "engine-x") is an open source reverse proxy server for HTTP, H
## hosting some simple static content

```console
$ docker run --name some-nginx -v /some/content:/usr/share/nginx/html:ro -d nginx
```

Alternatively, a simple `Dockerfile` can be used to generate a new image that includes the necessary content (which is a much cleaner solution than the bind mount above):

```dockerfile
FROM nginx
COPY static-html-directory /usr/share/nginx/html
```

Place this file in the same directory as your directory of content ("static-html-directory"), run `docker build -t some-content-nginx .`, then start your container:

```console
$ docker run --name some-nginx -d some-content-nginx
```

## exposing the port

```console
$ docker run --name some-nginx -d -p 8080:80 some-content-nginx
```

Then you can hit `http://localhost:8080` or `http://host-ip:8080` in your browser.

## complex configuration

```console
$ docker run --name some-nginx -v /some/nginx.conf:/etc/nginx/nginx.conf:ro -d nginx
```

For information on the syntax of the Nginx configuration files, see [the official documentation](http://nginx.org/en/docs/) (specifically the [Beginner's Guide](http://nginx.org/en/docs/beginners_guide.html#conf_structure)).
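As a rough illustration (a sketch, not taken from the image), a minimal `nginx.conf` for serving static files in the foreground might look like:

```nginx
daemon off;
events { }
http {
    server {
        listen 80;
        location / {
            root /usr/share/nginx/html;
        }
    }
}
```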
@ -37,13 +47,19 @@ Be sure to include `daemon off;` in your custom configuration to ensure that Ngi
If you wish to adapt the default configuration, use something like the following to copy it from a running Nginx container:

```console
$ docker cp some-nginx:/etc/nginx/nginx.conf /some/nginx.conf
```

As above, this can also be accomplished more cleanly using a simple `Dockerfile`:

```dockerfile
FROM nginx
COPY nginx.conf /etc/nginx/nginx.conf
```

Then, build with `docker build -t some-custom-nginx .` and run:

```console
$ docker run --name some-nginx -d some-custom-nginx
```


@ -14,14 +14,18 @@ Node.js internally uses the Google V8 JavaScript engine to execute code; a large
## Create a `Dockerfile` in your Node.js app project

```dockerfile
FROM node:0.10-onbuild
# replace this with your application's default port
EXPOSE 8888
```

You can then build and run the Docker image:

```console
$ docker build -t my-nodejs-app .
$ docker run -it --rm --name my-running-app my-nodejs-app
```

### Notes
@ -31,4 +35,6 @@ The image assumes that your application has a file named [`package.json`](https:
For many simple, single file projects, you may find it inconvenient to write a complete `Dockerfile`. In such cases, you can run a Node.js script by using the Node.js Docker image directly:

```console
$ docker run -it --rm --name my-running-script -v "$PWD":/usr/src/myapp -w /usr/src/myapp node:0.10 node your-daemon-or-script.js
```
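For the `onbuild` variant, the image expects a `package.json` in your project root; a minimal sketch (names are placeholders) might be:

```json
{
  "name": "my-nodejs-app",
  "version": "0.0.1",
  "scripts": {
    "start": "node server.js"
  }
}
```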


@ -12,18 +12,24 @@ This image requires a running PostgreSQL server.
## Start a PostgreSQL server

```console
$ docker run -d -e POSTGRES_USER=odoo -e POSTGRES_PASSWORD=odoo --name db postgres
```

## Start an Odoo instance

```console
$ docker run -p 127.0.0.1:8069:8069 --name odoo --link db:db -t odoo
```

The alias of the container running Postgres must be `db` for Odoo to be able to connect to the Postgres server.

## Stop and restart an Odoo instance

```console
$ docker stop odoo
$ docker start -a odoo
```

## Stop and restart a PostgreSQL server
@ -35,24 +41,32 @@ Restarting a PostgreSQL server does not affect the created databases.
The default configuration file for the server (located at `/etc/odoo/openerp-server.conf`) can be overridden at startup using volumes. Suppose you have a custom configuration at `/path/to/config/openerp-server.conf`, then
```console
$ docker run -v /path/to/config:/etc/odoo -p 127.0.0.1:8069:8069 --name odoo --link db:db -t odoo
```

Please use [this configuration template](https://github.com/odoo/docker/blob/master/8.0/openerp-server.conf) to write your custom configuration as we already set some arguments for running Odoo inside a Docker container.

You can also directly specify Odoo arguments inline. Those arguments must be given after the keyword `--` in the command-line, as follows:

```console
$ docker run -p 127.0.0.1:8069:8069 --name odoo --link db:db -t odoo -- --dbfilter=odoo_db_.*
```

## Mount custom addons

You can mount your own Odoo addons within the Odoo container, at `/mnt/extra-addons`:

```console
$ docker run -v /path/to/addons:/mnt/extra-addons -p 127.0.0.1:8069:8069 --name odoo --link db:db -t odoo
```

## Run multiple Odoo instances

```console
$ docker run -p 127.0.0.1:8070:8069 --name odoo2 --link db:db -t odoo
$ docker run -p 127.0.0.1:8071:8069 --name odoo3 --link db:db -t odoo
```
Please note that for the mail and report functionalities to work when the host and container ports differ (e.g. 8070 and 8069), you have to set, in Odoo, Settings->Parameters->System Parameters (requires technical features), `web.base.url` to the container port (e.g. 127.0.0.1:8069).
@ -62,6 +76,8 @@ Suppose you created a database from an Odoo instance named old-odoo, and you wan
By default, Odoo 8.0 uses a filestore (located at `/var/lib/odoo/filestore/`) for attachments. You should restore this filestore in your new Odoo instance by running

```console
$ docker run --volumes-from old-odoo -p 127.0.0.1:8070:8069 --name new-odoo --link db:db -t odoo
```

You can also simply prevent Odoo from using the filestore by setting the system parameter `ir_attachment.location` to `db-storage` in Settings->Parameters->System Parameters (requires technical features).


@ -12,7 +12,9 @@ ownCloud is a self-hosted file sync and share server. It provides access to your
Starting the ownCloud 8.1 instance listening on port 80 is as easy as the following:

```console
$ docker run -d -p 80:80 owncloud:8.1
```

Then go to http://localhost/ and go through the wizard. By default this container uses SQLite for data storage, but the wizard should allow for connecting to an existing database. Additionally, tags for 6.0, 7.0, or 8.0 are available.


@ -12,7 +12,9 @@ It aims to retain close compatibility to the official MySQL releases, while focu
## start a `%%REPO%%` server instance

```console
$ docker run --name some-%%REPO%% -e MYSQL_ROOT_PASSWORD=mysecretpassword -d %%REPO%%
```

This image includes `EXPOSE 3306` (the standard MySQL port), so container linking will make it automatically available to the linked containers (as the following examples illustrate).
@ -20,11 +22,15 @@ This image includes `EXPOSE 3306` (the standard MySQL port), so container linkin
Since Percona Server is intended as a drop-in replacement for MySQL, it can be used with many applications.

```console
$ docker run --name some-app --link some-%%REPO%%:mysql -d application-that-uses-mysql
```

## ... or via `mysql`

```console
$ docker run -it --link some-%%REPO%%:mysql --rm %%REPO%% sh -c 'exec mysql -h"$MYSQL_PORT_3306_TCP_ADDR" -P"$MYSQL_PORT_3306_TCP_PORT" -uroot -p"$MYSQL_ENV_MYSQL_ROOT_PASSWORD"'
```

## Environment Variables


@ -10,18 +10,24 @@ Perl is a high-level, general-purpose, interpreted, dynamic programming language
## Create a `Dockerfile` in your Perl app project

```dockerfile
FROM perl:5.20
COPY . /usr/src/myapp
WORKDIR /usr/src/myapp
CMD [ "perl", "./your-daemon-or-script.pl" ]
```

Then, build and run the Docker image:

```console
$ docker build -t my-perl-app .
$ docker run -it --rm --name my-running-app my-perl-app
```

## Run a single Perl script

For many simple, single file projects, you may find it inconvenient to write a complete `Dockerfile`. In such cases, you can run a Perl script by using the Perl Docker image directly:

```console
$ docker run -it --rm --name my-running-script -v "$PWD":/usr/src/myapp -w /usr/src/myapp perl:5.20 perl your-daemon-or-script.pl
```


@ -20,21 +20,29 @@ Zend Server is shared on [Docker-Hub](https://registry.hub.docker.com/_/php-zend
- To start a Zend Server cluster, execute the following command for each cluster node:

```console
$ docker run -e MYSQL_HOSTNAME=<db-ip> -e MYSQL_PORT=3306 -e MYSQL_USERNAME=<username> -e MYSQL_PASSWORD=<password> -e MYSQL_DBNAME=zend php-zendserver
```
#### Launching the Container from Dockerfile

- From a local folder containing this repo's clone, execute the following command to generate the image. The **image-id** will be output:

```console
$ docker build .
```

- To start a single Zend Server instance, execute:

```console
$ docker run <image-id>
```

- To start a Zend Server cluster, execute the following command on each cluster node:

```console
$ docker run -e MYSQL_HOSTNAME=<db-ip> -e MYSQL_PORT=3306 -e MYSQL_USERNAME=<username> -e MYSQL_PASSWORD=<password> -e MYSQL_DBNAME=zend <image-id>
```

#### Accessing Zend server


@@ -14,21 +14,27 @@ For PHP projects run through the command line interface (CLI), you can do the fo

### Create a `Dockerfile` in your PHP project

```dockerfile
FROM php:5.6-cli
COPY . /usr/src/myapp
WORKDIR /usr/src/myapp
CMD [ "php", "./your-script.php" ]
```

Then, run the commands to build and run the Docker image:

```console
$ docker build -t my-php-app .
$ docker run -it --rm --name my-running-app my-php-app
```

### Run a single PHP script

For many simple, single file projects, you may find it inconvenient to write a complete `Dockerfile`. In such cases, you can run a PHP script by using the PHP Docker image directly:

```console
$ docker run -it --rm --name my-running-script -v "$PWD":/usr/src/myapp -w /usr/src/myapp php:5.6-cli php your-script.php
```

## With Apache

@@ -36,19 +42,25 @@ More commonly, you will probably want to run PHP in conjunction with Apache http

### Create a `Dockerfile` in your PHP project

```dockerfile
FROM php:5.6-apache
COPY src/ /var/www/html/
```

Where `src/` is the directory containing all your PHP code. Then, run the commands to build and run the Docker image:

```console
$ docker build -t my-php-app .
$ docker run -it --rm --name my-running-app my-php-app
```

We recommend that you add a custom `php.ini` configuration. `COPY` it into `/usr/local/etc/php` by adding one more line to the Dockerfile above and running the same commands to build and run:

```dockerfile
FROM php:5.6-apache
COPY config/php.ini /usr/local/etc/php
COPY src/ /var/www/html/
```

Where `src/` is the directory containing all your PHP code and `config/` contains your `php.ini` file.

@@ -58,17 +70,19 @@ We provide two convenient scripts named `docker-php-ext-configure` and `docker-p

For example, if you want to have a PHP-FPM image with `iconv`, `mcrypt` and `gd` extensions, you can inherit the base image that you like, and write your own `Dockerfile` like this:

```dockerfile
FROM php:5.6-fpm
# Install modules
RUN apt-get update && apt-get install -y \
		libfreetype6-dev \
		libjpeg62-turbo-dev \
		libmcrypt-dev \
		libpng12-dev \
	&& docker-php-ext-install iconv mcrypt \
	&& docker-php-ext-configure gd --with-freetype-dir=/usr/include/ --with-jpeg-dir=/usr/include/ \
	&& docker-php-ext-install gd
CMD ["php-fpm"]
```

Remember, you must install dependencies for your extensions manually. If an extension needs custom `configure` arguments, you can use the `docker-php-ext-configure` script like this example.

@@ -76,4 +90,6 @@ Remember, you must install dependencies for your extensions manually. If an exte

If you don't want to include a `Dockerfile` in your project, it is sufficient to do the following:

```console
$ docker run -it --rm --name my-apache-php-app -v "$PWD":/var/www/html php:5.6-apache
```


@@ -12,7 +12,9 @@ PostgreSQL implements the majority of the SQL:2011 standard, is ACID-compliant a

## start a postgres instance

```console
$ docker run --name some-postgres -e POSTGRES_PASSWORD=mysecretpassword -d postgres
```

This image includes `EXPOSE 5432` (the postgres port), so standard container linking will make it automatically available to the linked containers. The default `postgres` user and database are created in the entrypoint with `initdb`.

@@ -21,11 +23,15 @@ This image includes `EXPOSE 5432` (the postgres port), so standard container lin

## connect to it from an application

```console
$ docker run --name some-app --link some-postgres:postgres -d application-that-uses-postgres
```

## ... or via `psql`

```console
$ docker run -it --link some-postgres:postgres --rm postgres sh -c 'exec psql -h "$POSTGRES_PORT_5432_TCP_ADDR" -p "$POSTGRES_PORT_5432_TCP_PORT" -U postgres'
```

## Environment Variables

@@ -49,9 +55,11 @@ If you would like to do additional initialization in an image derived from this

You can also extend the image with a simple `Dockerfile` to set the locale. The following example will set the default locale to `de_DE.utf8`:

```dockerfile
FROM postgres:9.4
RUN localedef -i de_DE -c -f UTF-8 -A /usr/share/locale/locale.alias de_DE.UTF-8
ENV LANG de_DE.utf8
```

Since database initialization only happens on container startup, this allows us to set the default locale before the database is created.


@@ -12,27 +12,37 @@ PyPy started out as a Python interpreter written in the Python language itself.

## Create a `Dockerfile` in your Python app project

```dockerfile
FROM pypy:3-onbuild
CMD [ "pypy3", "./your-daemon-or-script.py" ]
```

or (if you need to use PyPy 2):

```dockerfile
FROM pypy:2-onbuild
CMD [ "pypy", "./your-daemon-or-script.py" ]
```

These images include multiple `ONBUILD` triggers, which should be all you need to bootstrap most applications. The build will `COPY` a `requirements.txt` file, `RUN pip install` on said file, and then copy the current directory into `/usr/src/app`.
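To round out the example, the `your-daemon-or-script.py` referenced by the `CMD` above can be any ordinary Python program; here is a minimal placeholder sketch (the filename and the printed message are illustrative assumptions, not part of the image):

```python
# your-daemon-or-script.py -- minimal placeholder entry point (illustrative only)
import sys


def main():
    # Report which interpreter version ended up running the script;
    # under `pypy3` this prints the Python version that PyPy implements.
    print("Hello from your-daemon-or-script.py on Python %d.%d" % sys.version_info[:2])


if __name__ == "__main__":
    main()
```

The same file works unchanged under CPython or PyPy; the image's `CMD` simply invokes it with `pypy3` (or `pypy` for the 2.x variant).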
You can then build and run the Docker image:

```console
$ docker build -t my-python-app .
$ docker run -it --rm --name my-running-app my-python-app
```

## Run a single Python script

For many simple, single file projects, you may find it inconvenient to write a complete `Dockerfile`. In such cases, you can run a Python script by using the Python Docker image directly:

```console
$ docker run -it --rm --name my-running-script -v "$PWD":/usr/src/myapp -w /usr/src/myapp pypy:3 pypy3 your-daemon-or-script.py
```

or (again, if you need to use PyPy 2):

```console
$ docker run -it --rm --name my-running-script -v "$PWD":/usr/src/myapp -w /usr/src/myapp pypy:2 pypy your-daemon-or-script.py
```


@@ -10,27 +10,37 @@ Python is an interpreted, interactive, object-oriented, open-source programming

## Create a `Dockerfile` in your Python app project

```dockerfile
FROM python:3-onbuild
CMD [ "python", "./your-daemon-or-script.py" ]
```

or (if you need to use Python 2):

```dockerfile
FROM python:2-onbuild
CMD [ "python", "./your-daemon-or-script.py" ]
```

These images include multiple `ONBUILD` triggers, which should be all you need to bootstrap most applications. The build will `COPY` a `requirements.txt` file, `RUN pip install` on said file, and then copy the current directory into `/usr/src/app`.
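Conceptually, building on top of an `onbuild` variant behaves roughly as if your one-line `Dockerfile` had been expanded along these lines (a sketch of the trigger behavior for illustration, not the image's exact contents):

```dockerfile
FROM python:3

# Approximate effect of the ONBUILD triggers when *your* image is built:
RUN mkdir -p /usr/src/app
WORKDIR /usr/src/app
COPY requirements.txt /usr/src/app/
RUN pip install -r requirements.txt
COPY . /usr/src/app
```

Note that the trigger order matters: dependencies are installed before your source is copied, so the `pip install` layer is only rebuilt when `requirements.txt` itself changes, not on every source edit.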
You can then build and run the Docker image:

```console
$ docker build -t my-python-app .
$ docker run -it --rm --name my-running-app my-python-app
```

## Run a single Python script

For many simple, single file projects, you may find it inconvenient to write a complete `Dockerfile`. In such cases, you can run a Python script by using the Python Docker image directly:

```console
$ docker run -it --rm --name my-running-script -v "$PWD":/usr/src/myapp -w /usr/src/myapp python:3 python your-daemon-or-script.py
```

or (again, if you need to use Python 2):

```console
$ docker run -it --rm --name my-running-script -v "$PWD":/usr/src/myapp -w /usr/src/myapp python:2 python your-daemon-or-script.py
```


@@ -18,35 +18,47 @@ R is a GNU project. The source code for the R software environment is written pr

Launch R directly for interactive work:

```console
$ docker run -ti --rm r-base
```

## Batch mode

Link the working directory to run R batch commands. We recommend specifying a non-root user when linking a volume to the container to avoid permission changes, as illustrated here:

```console
$ docker run -ti --rm -v "$PWD":/home/docker -w /home/docker -u docker r-base R CMD check .
```

Alternatively, just run a bash session on the container first. This allows a user to run batch commands and also edit and run scripts:

```console
$ docker run -ti --rm r-base /usr/bin/bash
$ vim.tiny myscript.R
```

Write the script in the container, exit `vim` and run `Rscript`:

```console
$ Rscript myscript.R
```

## Dockerfiles

Use `r-base` as a base for your own Dockerfiles. For instance, something along the lines of the following will compile and run your project:

```dockerfile
FROM r-base:latest
COPY . /usr/local/src/myscripts
WORKDIR /usr/local/src/myscripts
CMD ["Rscript", "myscript.R"]
```

Build your image with the following command, run from the directory containing your `Dockerfile`:

```console
$ docker build -t myscript .
```

Running this container with no command will execute the script. Alternatively, a user could run this container in interactive or batch mode as described above, instead of linking volumes.


@@ -12,7 +12,9 @@ RabbitMQ is open source message broker software (sometimes called message-orient

One of the important things to note about RabbitMQ is that it stores data based on what it calls the "Node Name", which defaults to the hostname. What this means for usage in Docker is that we should specify `-h`/`--hostname` explicitly for each daemon so that we don't get a random hostname and can keep track of our data:

```console
$ docker run -d --hostname my-rabbit --name some-rabbit rabbitmq:3
```

If you give that a minute, then do `docker logs some-rabbit`, you'll see in the output a block similar to:

@@ -33,31 +35,41 @@ See the [RabbitMQ "Clustering Guide"](https://www.rabbitmq.com/clustering.html#e

For setting a consistent cookie (especially useful for clustering but also for remote/cross-container administration via `rabbitmqctl`), use `RABBITMQ_ERLANG_COOKIE`:

```console
$ docker run -d --hostname my-rabbit --name some-rabbit -e RABBITMQ_ERLANG_COOKIE='secret cookie here' rabbitmq:3
```

This can then be used from a separate instance to connect:

```console
$ docker run -it --rm --link some-rabbit:my-rabbit -e RABBITMQ_ERLANG_COOKIE='secret cookie here' rabbitmq:3 bash
root@f2a2d3d27c75:/# rabbitmqctl -n rabbit@my-rabbit list_users
Listing users ...
guest	[administrator]
```

Alternatively, one can also use `RABBITMQ_NODENAME` to make repeated `rabbitmqctl` invocations simpler:

```console
$ docker run -it --rm --link some-rabbit:my-rabbit -e RABBITMQ_ERLANG_COOKIE='secret cookie here' -e RABBITMQ_NODENAME=rabbit@my-rabbit rabbitmq:3 bash
root@f2a2d3d27c75:/# rabbitmqctl list_users
Listing users ...
guest	[administrator]
```

### Management Plugin

There is a second set of tags provided with the [management plugin](https://www.rabbitmq.com/management.html) installed and enabled by default, which is available on the standard management port of 15672, with the default username and password of `guest` / `guest`:

```console
$ docker run -d --hostname my-rabbit --name some-rabbit rabbitmq:3-management
```

You can access it by visiting `http://container-ip:15672` in a browser or, if you need access outside the host, on port 8080:

```console
$ docker run -d --hostname my-rabbit --name some-rabbit -p 8080:15672 rabbitmq:3-management
```

You can then go to `http://localhost:8080` or `http://host-ip:8080` in a browser.

@@ -65,7 +77,9 @@ You can then go to `http://localhost:8080` or `http://host-ip:8080` in a browser

If you wish to change the default username and password of `guest` / `guest`, you can do so with the `RABBITMQ_DEFAULT_USER` and `RABBITMQ_DEFAULT_PASS` environmental variables:

```console
$ docker run -d --hostname my-rabbit --name some-rabbit -e RABBITMQ_DEFAULT_USER=user -e RABBITMQ_DEFAULT_PASS=password rabbitmq:3-management
```

You can then go to `http://localhost:8080` or `http://host-ip:8080` in a browser and use `user`/`password` to gain access to the management console.

@@ -73,8 +87,12 @@ You can then go to `http://localhost:8080` or `http://host-ip:8080` in a browser

If you wish to change the default vhost, you can do so with the `RABBITMQ_DEFAULT_VHOST` environmental variable:

```console
$ docker run -d --hostname my-rabbit --name some-rabbit -e RABBITMQ_DEFAULT_VHOST=my_vhost rabbitmq:3-management
```

## Connecting to the daemon

```console
$ docker run --name some-app --link some-rabbit:rabbit -d application-that-uses-rabbitmq
```


@@ -10,7 +10,9 @@ Ruby on Rails or, simply, Rails is an open source web application framework whic

## Create a `Dockerfile` in your Rails app project

```dockerfile
FROM rails:onbuild
```

Put this file in the root of your app, next to the `Gemfile`.
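For context, a minimal `Gemfile` sitting next to that `Dockerfile` might look like the following (the Rails version pin is an illustrative assumption; use whatever your app actually requires):

```ruby
# Gemfile -- illustrative placeholder
source 'https://rubygems.org'

gem 'rails', '~> 4.2'
```

The image's `ONBUILD` triggers then run `bundle install` against this file (and its `Gemfile.lock`) at build time.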
@@ -18,12 +20,16 @@ This image includes multiple `ONBUILD` triggers which should cover most applicat

You can then build and run the Docker image:

```console
$ docker build -t my-rails-app .
$ docker run --name some-rails-app -d my-rails-app
```

You can test it by visiting `http://container-ip:3000` in a browser or, if you need access outside the host, on port 8080:

```console
$ docker run --name some-rails-app -p 8080:3000 -d my-rails-app
```

You can then go to `http://localhost:8080` or `http://host-ip:8080` in a browser.

@@ -32,12 +38,16 @@ You can then go to `http://localhost:8080` or `http://host-ip:8080` in a browser

The `onbuild` tag expects a `Gemfile.lock` in your app directory. This `docker run` will help you generate one. Run it in the root of your app, next to the `Gemfile`:

```console
$ docker run --rm -v "$PWD":/usr/src/app -w /usr/src/app ruby:2.1 bundle install
```

## Bootstrap a new Rails application

If you want to generate the scaffolding for a new Rails project, you can do the following:

```console
$ docker run -it --rm --user "$(id -u):$(id -g)" -v "$PWD":/usr/src/app -w /usr/src/app rails rails new webapp
```

This will create a sub-directory named `webapp` inside your current directory.


@@ -18,13 +18,17 @@ Perl 6 Language Documentation: [http://doc.perl6.org/](http://doc.perl6.org/)

Simply running a container with the image will launch a Perl 6 REPL:

```console
$ docker run -it rakudo-star
> say 'Hello, Perl!'
Hello, Perl!
```

You can also provide perl6 command line switches to `docker run`:

```console
$ docker run -it rakudo-star -e 'say "Hello!"'
```

# Contributing/Getting Help


@@ -10,13 +10,17 @@ Redis is an open-source, networked, in-memory, key-value data store with optiona

## start a redis instance

```console
$ docker run --name some-redis -d redis
```

This image includes `EXPOSE 6379` (the redis port), so standard container linking will make it automatically available to the linked containers (as the following examples illustrate).

## start with persistent storage

```console
$ docker run --name some-redis -d redis redis-server --appendonly yes
```

If persistence is enabled, data is stored in the `VOLUME /data`, which can be used with `--volumes-from some-volume-container` or `-v /docker/host/dir:/data` (see [docs.docker volumes](http://docs.docker.com/userguide/dockervolumes/)).

@@ -24,23 +28,31 @@ For more about Redis Persistence, see [http://redis.io/topics/persistence](http:

## connect to it from an application

```console
$ docker run --name some-app --link some-redis:redis -d application-that-uses-redis
```

## ... or via `redis-cli`

```console
$ docker run -it --link some-redis:redis --rm redis sh -c 'exec redis-cli -h "$REDIS_PORT_6379_TCP_ADDR" -p "$REDIS_PORT_6379_TCP_PORT"'
```

## Additionally, if you want to use your own redis.conf ...

You can create your own Dockerfile that adds a `redis.conf` from the context into `/usr/local/etc/redis/`, like so:

```dockerfile
FROM redis
COPY redis.conf /usr/local/etc/redis/redis.conf
CMD [ "redis-server", "/usr/local/etc/redis/redis.conf" ]
```

Alternatively, you can specify something along the same lines with `docker run` options:

```console
$ docker run -v /myredis/conf/redis.conf:/usr/local/etc/redis/redis.conf --name myredis redis redis-server /usr/local/etc/redis/redis.conf
```

Where `/myredis/conf/` is a local directory containing your `redis.conf` file. Using this method means that there is no need for you to have a Dockerfile for your redis container.


@@ -12,7 +12,9 @@ Redmine is a free and open source, web-based project management and issue tracki

This is the simplest setup; just run redmine.

```console
$ docker run -d --name some-redmine redmine
```

> not for multi-user production use ([redmine wiki](http://www.redmine.org/projects/redmine/wiki/RedmineInstall#Supported-database-back-ends))

@@ -24,15 +26,21 @@ Running Redmine with a database server is the recommended way.

- PostgreSQL

	```console
	$ docker run -d --name some-postgres -e POSTGRES_PASSWORD=secret -e POSTGRES_USER=redmine postgres
	```

- MySQL (replace `--link some-postgres:postgres` with `--link some-mysql:mysql` when running redmine)

	```console
	$ docker run -d --name some-mysql -e MYSQL_ROOT_PASSWORD=secret -e MYSQL_DATABASE=redmine mysql
	```

2. start redmine

	```console
	$ docker run -d --name some-%%REPO%% --link some-postgres:postgres %%REPO%%
	```

## Alternative Web Server

@@ -54,13 +62,17 @@ The Docker documentation is a good starting point for understanding the differen

1. Create a data directory on a suitable volume on your host system, e.g. `/my/own/datadir`.

2. Start your `%%REPO%%` container like this:

	```console
	$ docker run -d --name some-%%REPO%% -v /my/own/datadir:/usr/src/redmine/files --link some-postgres:postgres %%REPO%%
	```

The `-v /my/own/datadir:/usr/src/redmine/files` part of the command mounts the `/my/own/datadir` directory from the underlying host system as `/usr/src/redmine/files` inside the container, where Redmine will store uploaded files.

Note that users on host systems with SELinux enabled may see issues with this. The current workaround is to assign the relevant SELinux policy type to the new data directory so that the container will be allowed to access it:

```console
$ chcon -Rt svirt_sandbox_file_t /my/own/datadir
```

## Port Mapping


@@ -15,14 +15,16 @@ Older tags refer to the [deprecated registry](https://github.com/docker/docker-r

### Recommended: run the registry docker container

```console
$ docker run \
         -e SETTINGS_FLAVOR=s3 \
         -e AWS_BUCKET=acme-docker \
         -e STORAGE_PATH=/registry \
         -e AWS_KEY=AKIAHSHB43HS3J92MXZ \
         -e AWS_SECRET=xdDowwlK7TJajV1Y7EoOZrmuPEJlHYcNP2k4j49T \
         -e SEARCH_BACKEND=sqlalchemy \
         -p 5000:5000 \
         registry
```

NOTE: The container will try to allocate port 5000. If the port is already taken, find out which container is already using it by running `docker ps`.
@@ -10,14 +10,18 @@ The Robot Operating System (ROS) is a set of software libraries and tools that h

## Create a `Dockerfile` in your ROS app project

```dockerfile
FROM ros:indigo
# place here your application's setup specifics
CMD [ "roslaunch", "my-ros-app my-ros-app.launch" ]
```

You can then build and run the Docker image:

```console
$ docker build -t my-ros-app .
$ docker run -it --rm --name my-running-app my-ros-app
```

## Deployment use cases

@@ -43,7 +47,9 @@ ROS uses the `~/.ros/` directory for storing logs, and debugging info. If you wi

For example, if one wishes to use their own `.ros` folder that already resides in their local home directory, with a username of `ubuntu`, we can simply launch the container with an additional volume argument:

```console
$ docker run -v "/home/ubuntu/.ros/:/root/.ros/" ros
```

### Devices
@@ -62,16 +68,20 @@ If we want all our ROS nodes to easily talk to each other, we can use a virtu

> Build a ROS image that includes ROS tutorials using this `Dockerfile`:

```dockerfile
FROM ros:indigo-ros-base
# install ros tutorials packages
RUN apt-get update && apt-get install -y \
    ros-indigo-ros-tutorials \
    ros-indigo-common-tutorials \
    && rm -rf /var/lib/apt/lists/
```

> Then to build the image from within the same directory:

```console
$ docker build --tag ros:ros-tutorials .
```

#### Create network

@@ -85,68 +95,84 @@ If we want all our ROS nodes to easily talk to each other, we can use a virtu

> To create a container for the ROS master and advertise its service:

```console
$ docker run -it --rm \
    --publish-service=master.foo \
    --name master \
    ros:ros-tutorials \
    roscore
```

> Now you can see that master is running and is ready to manage our other ROS nodes. To add our `talker` node, we'll need to point the relevant environment variable to the master service:

```console
$ docker run -it --rm \
    --publish-service=talker.foo \
    --env ROS_HOSTNAME=talker \
    --env ROS_MASTER_URI=http://master:11311 \
    --name talker \
    ros:ros-tutorials \
    rosrun roscpp_tutorials talker
```

> Then in another terminal, run the `listener` node similarly:

```console
$ docker run -it --rm \
    --publish-service=listener.foo \
    --env ROS_HOSTNAME=listener \
    --env ROS_MASTER_URI=http://master:11311 \
    --name listener \
    ros:ros-tutorials \
    rosrun roscpp_tutorials listener
```

> Alright! You should see that `listener` is now echoing each message the `talker` is broadcasting. You can then list the containers and see something like this:

```console
$ docker service ls
SERVICE ID          NAME                NETWORK             CONTAINER
67ce73355e67        listener            foo                 a62019123321
917ee622d295        master              foo                 f6ab9155fdbe
7f5a4748fb8d        talker              foo                 e0da2ee7570a
```

> And for the services:

```console
$ docker ps
CONTAINER ID        IMAGE               COMMAND                CREATED              STATUS              PORTS               NAMES
a62019123321        ros:ros-tutorials   "/ros_entrypoint.sh    About a minute ago   Up About a minute   11311/tcp           listener
e0da2ee7570a        ros:ros-tutorials   "/ros_entrypoint.sh    About a minute ago   Up About a minute   11311/tcp           talker
f6ab9155fdbe        ros:ros-tutorials   "/ros_entrypoint.sh    About a minute ago   Up About a minute   11311/tcp           master
```

#### Introspection

> Ok, now that we see the two nodes are communicating, let's get inside one of the containers and do some introspection on what exactly the topics are:

```console
$ docker exec -it master bash
$ source /ros_entrypoint.sh
```

> If we then use `rostopic` to list published message topics, we should see something like this:

```console
$ rostopic list
/chatter
/rosout
/rosout_agg
```

#### Tear down

> To tear down the structure we've made, we just need to stop the containers and the services. We can stop and remove the containers using `Ctrl+C` where we launched the containers or using the stop command with the names we gave them:

```console
$ docker stop master talker listener
$ docker rm master talker listener
```

# More Resources
@@ -10,8 +10,10 @@ Ruby is a dynamic, reflective, object-oriented, general-purpose, open-source pro

## Create a `Dockerfile` in your Ruby app project

```dockerfile
FROM ruby:2.1-onbuild
CMD ["./your-daemon-or-script.rb"]
```

Put this file in the root of your app, next to the `Gemfile`.
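The `onbuild` variant runs `bundle install` against whatever `Gemfile` sits next to this `Dockerfile`. If the app doesn't have one yet, a minimal one can be bootstrapped like this (a sketch: the `sinatra` gem is purely an illustrative placeholder, not something the image requires):

```shell
# Write a minimal Gemfile in the current directory; the onbuild image's
# `bundle install` step picks it up at build time.
# The gem listed here is only an example dependency.
cat > Gemfile <<'EOF'
source 'https://rubygems.org'

gem 'sinatra'
EOF

cat Gemfile
```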
@@ -20,17 +22,23 @@ bundle install`.

You can then build and run the Ruby image:

```console
$ docker build -t my-ruby-app .
$ docker run -it --name my-running-script my-ruby-app
```

### Generate a `Gemfile.lock`

The `onbuild` tag expects a `Gemfile.lock` in your app directory. This `docker run` will help you generate one. Run it in the root of your app, next to the `Gemfile`:

```console
$ docker run --rm -v "$PWD":/usr/src/app -w /usr/src/app ruby:2.1 bundle install
```

## Run a single Ruby script

For many simple, single file projects, you may find it inconvenient to write a complete `Dockerfile`. In such cases, you can run a Ruby script by using the Ruby Docker image directly:

```console
$ docker run -it --rm --name my-running-script -v "$PWD":/usr/src/myapp -w /usr/src/myapp ruby:2.1 ruby your-daemon-or-script.rb
```
@@ -12,25 +12,35 @@ Sentry is a realtime event logging and aggregation platform. It specializes in m

1. start a redis container

    ```console
    $ docker run -d --name some-redis redis
    ```

2. start a database container:

    - Postgres (recommended by upstream):

        ```console
        $ docker run -d --name some-postgres -e POSTGRES_PASSWORD=secret -e POSTGRES_USER=sentry postgres
        ```

    - MySQL (later steps assume PostgreSQL; replace the `--link some-postgres:postgres` with `--link some-mysql:mysql`):

        ```console
        $ docker run -d --name some-mysql -e MYSQL_ROOT_PASSWORD=secret -e MYSQL_DATABASE=sentry mysql
        ```

3. now start up the sentry server

    ```console
    $ docker run -d --name some-sentry --link some-redis:redis --link some-postgres:postgres sentry
    ```

4. if this is a new database, you'll need to run `sentry upgrade`

    ```console
    $ docker run -it --rm --link some-postgres:postgres --link some-redis:redis sentry sentry upgrade
    ```

**Note: the `-it` is important as the initial upgrade will prompt to create an initial user and will fail without it**

@@ -38,13 +48,17 @@ Sentry is a realtime event logging and aggregation platform. It specializes in m

- using the celery image:

    ```console
    $ docker run -d --name celery-beat --link some-redis:redis -e CELERY_BROKER_URL=redis://redis celery celery beat
    $ docker run -d --name celery-worker1 --link some-redis:redis -e CELERY_BROKER_URL=redis://redis celery
    ```

- using the celery bundled with sentry:

    ```console
    $ docker run -d --name sentry-celery-beat --link some-redis:redis sentry sentry celery beat
    $ docker run -d --name sentry-celery1 --link some-redis:redis sentry sentry celery worker
    ```

### port mapping

@@ -54,4 +68,6 @@ If you'd like to be able to access the instance from the host without the contai

If you did not create a superuser during `sentry upgrade`, use the following to create one:

```console
$ docker run -it --rm --link some-postgres:postgres sentry sentry createsuperuser
```
@@ -12,15 +12,19 @@ SonarQube is an open source platform for continuous inspection of code quality.

The server is started this way:

```console
$ docker run -d --name sonarqube -p 9000:9000 -p 9092:9092 sonarqube:5.1
```

To analyse a project:

```console
# On Linux:
$ mvn sonar:sonar

# With boot2docker:
$ mvn sonar:sonar -Dsonar.host.url=http://$(boot2docker ip):9000 -Dsonar.jdbc.url="jdbc:h2:tcp://$(boot2docker ip)/sonar"
```

## Database configuration

@@ -28,12 +32,14 @@ By default, the image will use an embedded H2 database that is not suited for pr

The production database is configured with these variables: `SONARQUBE_JDBC_USERNAME`, `SONARQUBE_JDBC_PASSWORD` and `SONARQUBE_JDBC_URL`.

```console
$ docker run -d --name sonarqube \
    -p 9000:9000 -p 9092:9092 \
    -e SONARQUBE_JDBC_USERNAME=sonar \
    -e SONARQUBE_JDBC_PASSWORD=sonar \
    -e SONARQUBE_JDBC_URL=jdbc:postgresql://localhost/sonar \
    sonarqube:5.1
```

More recipes can be found [here](https://github.com/SonarSource/docker-sonarqube/blob/master/recipes.md).
@@ -8,6 +8,8 @@ Read more about [Thrift](https://thrift.apache.org).

This image is intended to run as an executable. Files are provided by mounting a directory. Here's an example of compiling `service.thrift` to Ruby in the current directory.

```console
$ docker run -v "$PWD:/data" thrift thrift -o /data --gen rb /data/service.thrift
```

Note that you may want to include `-u $(id -u)` to set the UID on generated files. The thrift process runs as root by default, which will generate root-owned files depending on your Docker setup.
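A quick sketch of that ownership point: `id -u` prints the invoking user's numeric UID, which `-u` then hands to the container so generated files end up owned by you rather than root. (The `docker run` line is only echoed here, since it assumes Docker and a `service.thrift` are actually present.)

```shell
# id -u prints the current user's numeric UID (0 means root).
uid="$(id -u)"

# Hypothetical invocation mirroring the example above -- echoed rather than
# executed, since it assumes Docker and a service.thrift in $PWD.
echo docker run -u "$uid" -v "$PWD:/data" thrift thrift -o /data --gen rb /data/service.thrift
```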
@@ -10,11 +10,15 @@ Apache Tomcat (or simply Tomcat) is an open source web server and servlet contai

Run the default Tomcat server (`CMD ["catalina.sh", "run"]`):

```console
$ docker run -it --rm tomcat:8.0
```

You can test it by visiting `http://container-ip:8080` in a browser or, if you need access outside the host, on port 8888:

```console
$ docker run -it --rm -p 8888:8080 tomcat:8.0
```

You can then go to `http://localhost:8888` or `http://host-ip:8888` in a browser.
@@ -14,42 +14,46 @@ Development of Ubuntu is led by UK-based Canonical Ltd., a company owned by Sout

### `ubuntu:14.04`

```console
$ docker run ubuntu:14.04 grep -v '^#' /etc/apt/sources.list

deb http://archive.ubuntu.com/ubuntu/ trusty main restricted
deb-src http://archive.ubuntu.com/ubuntu/ trusty main restricted

deb http://archive.ubuntu.com/ubuntu/ trusty-updates main restricted
deb-src http://archive.ubuntu.com/ubuntu/ trusty-updates main restricted

deb http://archive.ubuntu.com/ubuntu/ trusty universe
deb-src http://archive.ubuntu.com/ubuntu/ trusty universe
deb http://archive.ubuntu.com/ubuntu/ trusty-updates universe
deb-src http://archive.ubuntu.com/ubuntu/ trusty-updates universe

deb http://archive.ubuntu.com/ubuntu/ trusty-security main restricted
deb-src http://archive.ubuntu.com/ubuntu/ trusty-security main restricted
deb http://archive.ubuntu.com/ubuntu/ trusty-security universe
deb-src http://archive.ubuntu.com/ubuntu/ trusty-security universe
```

### `ubuntu:12.04`

```console
$ docker run ubuntu:12.04 cat /etc/apt/sources.list

deb http://archive.ubuntu.com/ubuntu/ precise main restricted
deb-src http://archive.ubuntu.com/ubuntu/ precise main restricted

deb http://archive.ubuntu.com/ubuntu/ precise-updates main restricted
deb-src http://archive.ubuntu.com/ubuntu/ precise-updates main restricted

deb http://archive.ubuntu.com/ubuntu/ precise universe
deb-src http://archive.ubuntu.com/ubuntu/ precise universe
deb http://archive.ubuntu.com/ubuntu/ precise-updates universe
deb-src http://archive.ubuntu.com/ubuntu/ precise-updates universe

deb http://archive.ubuntu.com/ubuntu/ precise-security main restricted
deb-src http://archive.ubuntu.com/ubuntu/ precise-security main restricted
deb http://archive.ubuntu.com/ubuntu/ precise-security universe
deb-src http://archive.ubuntu.com/ubuntu/ precise-security universe
```
@@ -107,7 +107,7 @@ for repo in "${repos[@]}"; do

````bash
composeYml=
if [ -f "$repo/docker-compose.yml" ]; then
	compose="$(cat "$repo/compose.md" 2>/dev/null || cat "$helperDir/compose.md")"
	composeYml=$'```yaml\n'"$(cat "$repo/docker-compose.yml")"$'\n```'
fi

cp -v "$helperDir/template.md" "$repo/README.md"
````
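The updated `composeYml` assignment leans on Bash ANSI-C quoting (`$'…'`), in which `\n` expands to a real newline, to splice literal fence lines around the file contents instead of tab-indenting them. A standalone sketch of the same trick, using a throwaway temp file rather than a real `docker-compose.yml`:

```shell
# Stand-in for a repo's docker-compose.yml (throwaway temp file).
tmp="$(mktemp)"
printf 'version: "2"\n' > "$tmp"

# $'...' expands \n to a real newline, so the opening fence, the file
# contents, and the closing fence land on separate lines.
fenced=$'```yaml\n'"$(cat "$tmp")"$'\n```'

echo "$fenced"
rm -f "$tmp"
```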
@@ -8,13 +8,17 @@ In order to use the image, it is necessary to accept the terms of the WebSphere

The image is designed to support a number of different usage patterns. The following examples are based on the Liberty [application deployment sample](https://developer.ibm.com/wasdev/docs/article_appdeployment/) and assume that [DefaultServletEngine.zip](https://www.ibm.com/developerworks/mydeveloperworks/blogs/wasdev/resource/DefaultServletEngine.zip) has been extracted to `/tmp` and the `server.xml` updated to accept HTTP connections from outside of the container by adding the following element inside the `server` stanza:

```xml
<httpEndpoint host="*" httpPort="9080" httpsPort="-1"/>
```

1. The image contains a default server configuration that specifies the `webProfile-6.0` feature and exposes ports 9080 and 9443 for HTTP and HTTPS respectively. A WAR file can therefore be mounted into the `dropins` directory of this server and run. The following example starts a container in the background running a WAR file from the host file system with the HTTP and HTTPS ports mapped to 80 and 443 respectively.

    ```console
    $ docker run -e LICENSE=accept -d -p 80:9080 -p 443:9443 \
        -v /tmp/DefaultServletEngine/dropins/Sample1.war:/opt/ibm/wlp/usr/servers/defaultServer/dropins/Sample1.war \
        websphere-liberty
    ```

    Once the server has started, you can browse to http://localhost/Sample1/SimpleServlet on the Docker host.

@@ -22,36 +26,47 @@ The image is designed to support a number of different usage patterns. The follo

2. For greater flexibility over configuration, it is possible to mount an entire server configuration directory from the host and then specify the server name as a parameter to the run command. Note that this particular example server configuration only provides HTTP access.

    ```console
    $ docker run -e LICENSE=accept -d -p 80:9080 \
        -v /tmp/DefaultServletEngine:/opt/ibm/wlp/usr/servers/DefaultServletEngine \
        websphere-liberty /opt/ibm/wlp/bin/server run DefaultServletEngine
    ```

3. It is also possible to build an application layer on top of this image using either the default server configuration or a new server configuration and, optionally, accept the license as part of that build. Here we have copied `Sample1.war` from `/tmp/DefaultServletEngine/dropins` to the same directory as the following `Dockerfile`.

    ```dockerfile
    FROM websphere-liberty
    ADD Sample1.war /opt/ibm/wlp/usr/servers/defaultServer/dropins/
    ENV LICENSE accept
    ```

    This can then be built and run as follows:

    ```console
    $ docker build -t app .
    $ docker run -d -p 80:9080 -p 443:9443 app
    ```

4. Lastly, it is possible to mount a data volume container containing the application and the server configuration on to the image. This has the benefit that it has no dependency on files from the host but still allows the application container to be easily re-mounted on a newer version of the application server image. The example assumes that you have copied the `/tmp/DefaultServletEngine` directory into the same directory as the `Dockerfile`.

    Build and run the data volume container:

    ```dockerfile
    FROM websphere-liberty
    ADD DefaultServletEngine /opt/ibm/wlp/usr/servers/DefaultServletEngine
    ```

    ```console
    $ docker build -t app-image .
    $ docker run -d -v /opt/ibm/wlp/usr/servers/DefaultServletEngine \
        --name app app-image true
    ```

    Run the WebSphere Liberty image with the volumes from the data volume container mounted:

    ```console
    $ docker run -e LICENSE=accept -d -p 80:9080 \
        --volumes-from app websphere-liberty \
        /opt/ibm/wlp/bin/server run DefaultServletEngine
    ```
@@ -8,7 +8,9 @@ WordPress is a free and open source blogging tool and a content management syste

# How to use this image

```console
$ docker run --name some-%%REPO%% --link some-mysql:mysql -d %%REPO%%
```

The following environment variables are also honored for configuring your WordPress instance:

@@ -22,14 +24,18 @@ If the `WORDPRESS_DB_NAME` specified does not already exist on the given MySQL s

If you'd like to be able to access the instance from the host without the container's IP, standard port mappings can be used:

```console
$ docker run --name some-%%REPO%% --link some-mysql:mysql -p 8080:80 -d %%REPO%%
```

Then, access it via `http://localhost:8080` or `http://host-ip:8080` in a browser.

If you'd like to use an external database instead of a linked `mysql` container, specify the hostname and port with `WORDPRESS_DB_HOST` along with the password in `WORDPRESS_DB_PASSWORD` and the username in `WORDPRESS_DB_USER` (if it is something other than `root`):

```console
$ docker run --name some-%%REPO%% -e WORDPRESS_DB_HOST=10.1.2.3:3306 \
    -e WORDPRESS_DB_USER=... -e WORDPRESS_DB_PASSWORD=... -d %%REPO%%
```

## %%COMPOSE%%