Run update.sh

This commit is contained in:
Docker Library Bot 2015-08-13 13:07:10 -07:00
parent fd6ae7a5b7
commit 95fbe35fd0
70 changed files with 1287 additions and 668 deletions


@@ -16,7 +16,9 @@ Documentation for Aerospike is available at [http://aerospike.com/docs](https://
The following will run `asd` with all the exposed ports forwarded to the host machine.
```console
$ docker run -d --name aerospike -p 3000:3000 -p 3001:3001 -p 3002:3002 -p 3003:3003 aerospike/aerospike-server
```
**NOTE** Although this is the simplest method of getting Aerospike up and running, it is not the preferred method. To properly run the container, please specify a **custom configuration** with the **access-address** defined.
@@ -36,7 +38,9 @@ This will tell `asd` to use the file in `/opt/aerospike/etc/aerospike.conf`,
A full example:
```console
$ docker run -d -v <DIRECTORY>:/opt/aerospike/etc --name aerospike -p 3000:3000 -p 3001:3001 -p 3002:3002 -p 3003:3003 aerospike/aerospike-server asd --foreground --config-file /opt/aerospike/etc/aerospike.conf
```
## access-address Configuration
@@ -62,7 +66,9 @@ Where `<DIRECTORY>` is the path to a directory containing your data files.
A full example:
```console
$ docker run -d -v <DIRECTORY>:/opt/aerospike/data --name aerospike -p 3000:3000 -p 3001:3001 -p 3002:3002 -p 3003:3003 aerospike/aerospike-server
```
## Clustering


@@ -20,17 +20,21 @@ For more information about this image and its history, please see the [relevant
Use like you would any other base image:
```dockerfile
FROM alpine:3.1
RUN apk add --update mysql-client && rm -rf /var/cache/apk/*
ENTRYPOINT ["mysql"]
```
This example has a virtual image size of only 16 MB. Compare that to our good friend Ubuntu:
```dockerfile
FROM ubuntu:14.04
RUN apt-get update \
&& apt-get install -y mysql-client \
&& rm -rf /var/lib/apt/lists/*
ENTRYPOINT ["mysql"]
```
This yields a virtual image size of about 232 MB.


@@ -38,19 +38,25 @@ ArangoDB Documentation
In order to start an ArangoDB instance, run
```console
$ docker run -d --name arangodb-instance arangodb
```
This will create and launch the ArangoDB Docker instance as a background process; its identifier is printed, and the plain-text name will be *arangodb-instance*, as stated above. By default, ArangoDB listens on port 8529 for requests, and the image includes `EXPOSE 8529`. If you link an application container, the port is automatically available in the linked container. See the following examples.
In order to get the IP address ArangoDB listens on, run:
```console
$ docker inspect --format '{{ .NetworkSettings.IPAddress }}' arangodb-instance
```
### Using the instance
In order to use the running instance from an application, link the container
```console
$ docker run --name my-arangodb-app --link arangodb-instance:db-link arangodb
```
This will use the instance with the name `arangodb-instance` and link it into the application container. The application container will contain environment variables
@@ -66,7 +72,9 @@ These can be used to access the database.
If you want to expose the port to the outside world, run
```console
$ docker run -p 8529:8529 -d arangodb
```
ArangoDB listens on port 8529 for requests, and the image includes `EXPOSE 8529`. The `-p 8529:8529` option exposes this port on the host.
@@ -80,10 +88,12 @@ A good explanation about persistence and Docker containers can be found here: [Do
You can map the container's volumes to a directory on the host, so that the data is kept between runs of the container. This path `/tmp/arangodb` is in general not the correct place to store your persistent files - it is just an example!
```console
$ mkdir /tmp/arangodb
$ docker run -p 8529:8529 -d \
-v /tmp/arangodb:/var/lib/arangodb \
arangodb
```
This will use the `/tmp/arangodb` directory of the host as the database directory for ArangoDB inside the container.
@@ -93,27 +103,37 @@ The ArangoDB startup configuration is specified in the file `/etc/arangodb/arang
If `/my/custom/arangod.conf` is the path of your arangodb configuration file, you can start your `arangodb` container like this:
```console
$ docker run --name some-arangodb -v /my/custom:/etc/arangodb -d arangodb:tag
```
This will start a new container `some-arangodb` where the ArangoDB instance uses the startup settings from your config file instead of the default one.
Note that users on host systems with SELinux enabled may see issues with this. The current workaround is to assign the relevant SELinux policy type to your new config file so that the container will be allowed to mount it:
```console
$ chcon -Rt svirt_sandbox_file_t /my/custom
```
### Using a data container
Alternatively you can create a container holding the data.
```console
$ docker run -d --name arangodb-persist -v /var/lib/arangodb debian:8.0 true
```
And use this data container in your ArangoDB container.
```console
$ docker run --volumes-from arangodb-persist -p 8529:8529 arangodb
```
If you want to save a few bytes, you can alternatively use [hello-world](https://registry.hub.docker.com/_/hello-world/), [busybox](https://registry.hub.docker.com/_/busybox/) or [alpine](https://registry.hub.docker.com/_/alpine/) for creating the volume-only containers. For example:
```console
$ docker run -d --name arangodb-persist -v /var/lib/arangodb alpine true
```
# License


@@ -21,15 +21,19 @@ BusyBox combines tiny versions of many common UNIX utilities into a single small
## Run BusyBox shell
```console
$ docker run -it --rm busybox
```
This will drop you into an `sh` shell to allow you to do what you want inside a BusyBox system.
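You can also run a single command non-interactively; this is just an illustrative sketch using `echo`, which BusyBox provides:

```console
$ docker run --rm busybox echo "hello from busybox"
```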
## Create a `Dockerfile` for a binary
```dockerfile
FROM busybox
COPY ./my-static-binary /my-static-binary
CMD ["/my-static-binary"]
```
This `Dockerfile` will allow you to create a minimal image for your statically compiled binary. You will have to compile the binary in some other place like another container.
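One way to produce such a binary (a sketch, assuming a `hello.c` in the current directory and using the official `gcc` image) is to compile it statically in a throwaway container:

```console
$ docker run --rm -v "$PWD":/usr/src/app -w /usr/src/app gcc gcc -static -o my-static-binary hello.c
```

The resulting `my-static-binary` can then be copied in via the `Dockerfile` above.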


@@ -20,7 +20,9 @@ Apache Cassandra is an open source distributed database management system design
Starting a Cassandra instance is simple:
```console
$ docker run --name some-cassandra -d cassandra:tag
```
... where `some-cassandra` is the name you want to assign to your container and `tag` is the tag specifying the Cassandra version you want. See the list above for relevant tags.
@@ -28,13 +30,17 @@ Starting a Cassandra instance is simple:
This image exposes the standard Cassandra ports (see the [Cassandra FAQ](https://wiki.apache.org/cassandra/FAQ#ports)), so container linking makes the Cassandra instance available to other application containers. Start your application container like this in order to link it to the Cassandra container:
```console
$ docker run --name some-app --link some-cassandra:cassandra -d app-that-uses-cassandra
```
## Make a cluster
Using the environment variables documented below, there are two cluster scenarios: instances on the same machine and instances on separate machines. For the same machine, start the instance as described above. To start other instances, just tell each new node where the first is.
```console
$ docker run --name some-cassandra2 -d -e CASSANDRA_SEEDS="$(docker inspect --format='{{ .NetworkSettings.IPAddress }}' some-cassandra)" cassandra:tag
```
... where `some-cassandra` is the name of your original Cassandra Server container, taking advantage of `docker inspect` to get the IP address of the other container.
@@ -42,21 +48,29 @@ For separate machines (i.e., two VMs on a cloud provider), you need to tell Cassan
Assuming the first machine's IP address is `10.42.42.42` and the second's is `10.43.43.43`, start the first with exposed gossip port:
```console
$ docker run --name some-cassandra -d -e CASSANDRA_BROADCAST_ADDRESS=10.42.42.42 -p 7000:7000 cassandra:tag
```
Then start a Cassandra container on the second machine, with the exposed gossip port and seed pointing to the first machine:
```console
$ docker run --name some-cassandra -d -e CASSANDRA_BROADCAST_ADDRESS=10.43.43.43 -p 7000:7000 -e CASSANDRA_SEEDS=10.42.42.42 cassandra:tag
```
## Connect to Cassandra from `cqlsh`
The following command starts another Cassandra container instance and runs `cqlsh` (Cassandra Query Language Shell) against your original Cassandra container, allowing you to execute CQL statements against your database instance:
```console
$ docker run -it --link some-cassandra:cassandra --rm cassandra sh -c 'exec cqlsh "$CASSANDRA_PORT_9042_TCP_ADDR"'
```
... or (simplified to take advantage of the `/etc/hosts` entry Docker adds for linked containers):
```console
$ docker run -it --link some-cassandra:cassandra --rm cassandra cqlsh cassandra
```
... where `some-cassandra` is the name of your original Cassandra Server container.
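As a quick sanity check (the exact IP in the output will vary), you can inspect the `/etc/hosts` entry the link creates:

```console
$ docker run --link some-cassandra:cassandra --rm cassandra cat /etc/hosts
```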
@@ -66,11 +80,15 @@ More information about the CQL can be found in the [Cassandra documentation](htt
The `docker exec` command allows you to run commands inside a Docker container. The following command line will give you a bash shell inside your `cassandra` container:
```console
$ docker exec -it some-cassandra bash
```
The Cassandra Server log is available through Docker's container log:
```console
$ docker logs some-cassandra
```
## Environment Variables
@@ -124,13 +142,17 @@ The Docker documentation is a good starting point for understanding the differen
1. Create a data directory on a suitable volume on your host system, e.g. `/my/own/datadir`.
2. Start your `cassandra` container like this:
```console
$ docker run --name some-cassandra -v /my/own/datadir:/var/lib/cassandra/data -d cassandra:tag
```
The `-v /my/own/datadir:/var/lib/cassandra/data` part of the command mounts the `/my/own/datadir` directory from the underlying host system as `/var/lib/cassandra/data` inside the container, where Cassandra by default will write its data files.
Note that users on host systems with SELinux enabled may see issues with this. The current workaround is to assign the relevant SELinux policy type to the new data directory so that the container will be allowed to access it:
```console
$ chcon -Rt svirt_sandbox_file_t /my/own/datadir
```
## No connections until Cassandra init completes


@@ -14,19 +14,27 @@ Celery is an open source asynchronous task queue/job queue based on distributed
## start a celery worker (RabbitMQ Broker)
```console
$ docker run --link some-rabbit:rabbit --name some-celery -d celery
```
### check the status of the cluster
```console
$ docker run --link some-rabbit:rabbit --rm celery celery status
```
## start a celery worker (Redis Broker)
```console
$ docker run --link some-redis:redis -e CELERY_BROKER_URL=redis://redis --name some-celery -d celery
```
### check the status of the cluster
```console
$ docker run --link some-redis:redis -e CELERY_BROKER_URL=redis://redis --rm celery celery status
```
# Supported Docker versions


@@ -42,45 +42,55 @@ Currently, systemd in CentOS 7 has been removed and replaced with a `fakesystemd
## Dockerfile for systemd base image
```dockerfile
FROM centos:7
MAINTAINER "you" <your@email.here>
ENV container docker
RUN yum -y swap -- remove fakesystemd -- install systemd systemd-libs
RUN yum -y update; yum clean all; \
(cd /lib/systemd/system/sysinit.target.wants/; for i in *; do [ $i ==
systemd-tmpfiles-setup.service ] || rm -f $i; done); \
rm -f /lib/systemd/system/multi-user.target.wants/*;\
rm -f /etc/systemd/system/*.wants/*;\
rm -f /lib/systemd/system/local-fs.target.wants/*; \
rm -f /lib/systemd/system/sockets.target.wants/*udev*; \
rm -f /lib/systemd/system/sockets.target.wants/*initctl*; \
rm -f /lib/systemd/system/basic.target.wants/*;\
rm -f /lib/systemd/system/anaconda.target.wants/*;
VOLUME [ "/sys/fs/cgroup" ]
CMD ["/usr/sbin/init"]
```
This Dockerfile swaps out fakesystemd for the real package, but deletes a number of unit files which might cause issues. From here, you are ready to build your base image.
```console
$ docker build --rm -t local/c7-systemd .
```
## Example systemd enabled app container
In order to use the systemd enabled base container created above, you will need to create your `Dockerfile` similar to the one below.
```dockerfile
FROM local/c7-systemd
RUN yum -y install httpd; yum clean all; systemctl enable httpd.service
EXPOSE 80
CMD ["/usr/sbin/init"]
```
Build this image:
```console
$ docker build --rm -t local/c7-systemd-httpd .
```
## Running a systemd enabled app container
In order to run a container with systemd, you will need to use the `--privileged` option mentioned earlier, as well as mounting the cgroups volumes from the host. Below is an example command that will run the systemd enabled httpd container created earlier.
```console
$ docker run --privileged -ti -v /sys/fs/cgroup:/sys/fs/cgroup:ro -p 80:80 local/c7-systemd-httpd
```
This container is running with systemd in a limited context, but it must always be run as a privileged container with the cgroups filesystem mounted.
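To verify that systemd actually started the `httpd` service inside the container (a sketch; substitute your real container name or ID), you can use `docker exec`:

```console
$ docker exec -it <container-id> systemctl status httpd
```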


@@ -19,26 +19,32 @@ Clojure is a dialect of the Lisp programming language. It is a general-purpose p
Since the most common way to use Clojure is in conjunction with [Leiningen (`lein`)](http://leiningen.org/), this image assumes that's how you'll be working. The most straightforward way to use this image is to add a `Dockerfile` to an existing Leiningen/Clojure project:
```dockerfile
FROM clojure
COPY . /usr/src/app
WORKDIR /usr/src/app
CMD ["lein", "run"]
```
Then, run these commands to build and run the image:
```console
$ docker build -t my-clojure-app .
$ docker run -it --rm --name my-running-app my-clojure-app
```
While the above is the most straightforward example of a `Dockerfile`, it does have some drawbacks. The `lein run` command will download your dependencies, compile the project, and then run it. That's a lot of work, all of which you may not want done every time you run the image. To get around this, you can download the dependencies and compile the project ahead of time. This will significantly reduce startup time when you run your image.
```dockerfile
FROM clojure
RUN mkdir -p /usr/src/app
WORKDIR /usr/src/app
COPY project.clj /usr/src/app/
RUN lein deps
COPY . /usr/src/app
RUN mv "$(lein uberjar | sed -n 's/^Created \(.*standalone\.jar\)/\1/p')" app-standalone.jar
CMD ["java", "-jar", "app-standalone.jar"]
```
Writing the `Dockerfile` this way will download the dependencies (and cache them, so they are only re-downloaded when the dependencies change) and then compile them into a standalone jar ahead of time rather than each time the image is run.
@@ -48,7 +54,9 @@ You can then build and run the image as above.
If you have an existing Lein/Clojure project, it's fairly straightforward to compile your project into a jar from a container:
```console
$ docker run -it --rm -v "$PWD":/usr/src/app -w /usr/src/app clojure lein uberjar
```
This will build your project into a jar file located in your project's `target/uberjar` directory.


@@ -20,7 +20,9 @@ For support, please visit the [Couchbase support forum](https://forums.couchbase
# How to use this image: QuickStart
```console
$ docker run -d -p 8091:8091 couchbase
```
At this point go to http://localhost:8091 from the host machine to see the Admin Console web UI. More details and screenshots are given below in the **Single host, single container** section.
@@ -36,9 +38,11 @@ There are several deployment scenarios which this Docker image can easily suppor
A Couchbase Server Docker container will write all persistent and node-specific data under the directory `/opt/couchbase/var`. As this directory is declared to be a Docker volume, it will be persisted outside the normal union filesystem. This results in improved performance. It also allows you to easily migrate to a container running an updated point release of Couchbase Server without losing your data with a process like this:
```console
$ docker stop my-couchbase-container
$ docker run -d --name my-new-couchbase-container --volumes-from my-couchbase-container ....
$ docker rm my-couchbase-container
```
By default, the persisted location of the volume on your Docker host will be hidden away in a location managed by the Docker daemon. In order to control its location - in particular, to ensure that it is on a partition with sufficient disk space for your server - we recommend mapping the volume to a specific directory on the host filesystem using the `-v` option to `docker run`.
@@ -48,21 +52,27 @@ All of the example commands below will assume you are using volumes mapped to ho
If you have SELinux enabled, mounting host volumes in a container requires an extra step. Assuming you are mounting the `~/couchbase` directory on the host filesystem, you will need to run the following command once before running your first container on that host:
```console
$ mkdir ~/couchbase && chcon -Rt svirt_sandbox_file_t ~/couchbase
```
## Ulimits
Couchbase normally expects the following changes to ulimits:
```console
$ ulimit -n 40960 # nofile: max number of open files
$ ulimit -c unlimited # core: max core file size
$ ulimit -l unlimited # memlock: maximum locked-in-memory address space
```
These ulimit settings are necessary when running under heavy load, but if you are just doing light testing and development, you can omit these settings and everything will still work.
In order to set the ulimits in your container, you will need to run Couchbase Docker containers with the following additional `--ulimit` flags:
```console
$ docker run -d --ulimit nofile=40960:40960 --ulimit core=100000000:100000000 --ulimit memlock=100000000:100000000 couchbase
```
Since `unlimited` is not supported as a value, this example sets the core and memlock values to 100 GB. If your system has more than 100 GB of RAM, you will want to increase this value to match the available RAM on the system.
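To double-check that the flags take effect, you can print the resulting limit from inside a container (shown here with `busybox` for brevity; the same flags apply to the `couchbase` image):

```console
$ docker run --rm --ulimit nofile=40960:40960 busybox sh -c 'ulimit -n'
40960
```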
@@ -88,16 +98,20 @@ This is a quick way to try out Couchbase Server on your own machine with no inst
**Start the container**
```console
$ docker run -d -v ~/couchbase:/opt/couchbase/var -p 8091:8091 --name my-couchbase-server couchbase
```
We use the `--name` option to make it easier to refer to this running container in the future.
**Verify container start**
Use the container name you specified (e.g. `my-couchbase-server`) to view the logs:
```console
$ docker logs my-couchbase-server
Starting Couchbase Server -- Web UI available at http://<ip>:8091
```
**Connect to the Admin Console**
@@ -145,9 +159,11 @@ You should run the SDK on the host and point it to `http://localhost:8091/pools`
You can choose to mount `/opt/couchbase/var` from the host; however, you *must give each container a separate host directory*.
```console
$ docker run -d -v ~/couchbase/node1:/opt/couchbase/var couchbase
$ docker run -d -v ~/couchbase/node2:/opt/couchbase/var couchbase
$ docker run -d -v ~/couchbase/node3:/opt/couchbase/var -p 8091:8091 couchbase
```
**Setting up your Couchbase cluster**
@@ -201,7 +217,9 @@ Using the `--net=host` flag will have the following effects:
Start a container on *each host* via:
```console
$ docker run -d -v ~/couchbase:/opt/couchbase/var --net=host couchbase
```
To configure Couchbase Server:


@@ -15,19 +15,27 @@ Crate is an Elastic SQL Data Store. Distributed by design, Crate makes centraliz
## How to use this image
```console
$ docker run -d -p 4200:4200 -p 4300:4300 crate:latest
```
### Attach persistent data directory
```console
$ docker run -d -p 4200:4200 -p 4300:4300 -v <data-dir>:/data crate
```
### Use custom Crate configuration
```console
$ docker run -d -p 4200:4200 -p 4300:4300 crate -Des.config=/path/to/crate.yml
```
Any configuration settings may be specified upon startup using the `-D` option prefix. For example, configuring the cluster name by using system properties will work this way:
```console
$ docker run -d -p 4200:4200 -p 4300:4300 crate crate -Des.cluster.name=cluster
```
For further configuration options please refer to the [Configuration](https://crate.io/docs/stable/configuration.html) section of the online documentation.
@@ -37,7 +45,9 @@ To set environment variables for Crate Data you need to use the `--env` option w
For example, setting the heap size:
```console
$ docker run -d -p 4200:4200 -p 4300:4300 --env CRATE_HEAP_SIZE=32g crate
```
## Multicast
@@ -47,30 +57,36 @@ You can enable unicast in your custom `crate.yml`. See also: [Crate Multi Node S
Due to its architecture, Crate publishes the host it runs on for discovery within the cluster. Since the address of the host inside the Docker container differs from the actual host the Docker image is running on, you need to tell Crate to publish the address of the Docker host for discovery.
```console
$ docker run -d -p 4200:4200 -p 4300:4300 crate crate -Des.network.publish_host=host1.example.com
```
If you change the transport port from the default `4300` to something else, you also need to pass the publish port to Crate.
```console
$ docker run -d -p 4200:4200 -p 4321:4300 crate crate -Des.transport.publish_port=4321
```
### Example Usage in a Multinode Setup
```console
$ HOSTS='crate1.example.com:4300,crate2.example.com:4300,crate3.example.com:4300'
$ HOST=crate1.example.com
$ docker run -d \
-p 4200:4200 \
-p 4300:4300 \
--name node1 \
--volume /mnt/data:/data \
--env CRATE_HEAP_SIZE=8g \
crate:latest \
crate -Des.cluster.name=cratecluster \
-Des.node.name=crate1 \
-Des.transport.publish_port=4300 \
-Des.network.publish_host=$HOST \
-Des.multicast.enabled=false \
-Des.discovery.zen.ping.unicast.hosts=$HOSTS \
-Des.discovery.zen.minimum_master_nodes=2
```
# License

debian/README.md

@@ -37,10 +37,12 @@ http://httpredir.debian.org/debian testing main`).
The mirror of choice for these images is [httpredir.debian.org](http://httpredir.debian.org) so that it's as close to optimal as possible, regardless of location or connection.
```console
$ docker run debian:jessie cat /etc/apt/sources.list
deb http://httpredir.debian.org/debian jessie main
deb http://httpredir.debian.org/debian jessie-updates main
deb http://security.debian.org jessie/updates main
```
# Supported Docker versions


@@ -19,7 +19,9 @@ Django is a free and open source web application framework, written in Python, w
## Create a `Dockerfile` in your Django app project
```dockerfile
FROM django:onbuild
```
Put this file in the root of your app, next to the `requirements.txt`.
@@ -27,24 +29,32 @@ This image includes multiple `ONBUILD` triggers which should cover most applicat
You can then build and run the Docker image:
```console
$ docker build -t my-django-app .
$ docker run --name some-django-app -d my-django-app
```
You can test it by visiting `http://container-ip:8000` in a browser or, if you need access outside the host, on `http://localhost:8000` with the following command:
```console
$ docker run --name some-django-app -p 8000:8000 -d my-django-app
```
## Without a `Dockerfile`
Of course, if you don't want to take advantage of magical and convenient `ONBUILD` triggers, you can always just use `docker run` directly to avoid having to add a `Dockerfile` to your project.
```console
$ docker run --name some-django-app -v "$PWD":/usr/src/app -w /usr/src/app -p 8000:8000 -d django bash -c "pip install -r requirements.txt && python manage.py runserver 0.0.0.0:8000"
```
## Bootstrap a new Django Application
If you want to generate the scaffolding for a new Django project, you can do the following:
```console
$ docker run -it --rm --user "$(id -u):$(id -g)" -v "$PWD":/usr/src/app -w /usr/src/app django django-admin.py startproject mysite
```
This will create a sub-directory named `mysite` inside your current directory.
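The generated project has the standard Django layout; listing the new directory should show something like:

```console
$ ls mysite
manage.py  mysite
```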


@@ -17,11 +17,15 @@ Drupal is a free and open-source content-management framework written in PHP and
The basic pattern for starting a `drupal` instance is:
```console
$ docker run --name some-drupal -d drupal
```
If you'd like to be able to access the instance from the host without the container's IP, standard port mappings can be used:
```console
$ docker run --name some-drupal -p 8080:80 -d drupal
```
Then, access it via `http://localhost:8080` or `http://host-ip:8080` in a browser.
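As a quick check that the webserver is responding (assuming the port mapping above), you can request the response headers from the host:

```console
$ curl -I http://localhost:8080
```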
@@ -31,7 +35,9 @@ When first accessing the webserver provided by this image, it will go through a
## MySQL
```console
$ docker run --name some-drupal --link some-mysql:mysql -d drupal
```
- Database type: `MySQL, MariaDB, or equivalent`
- Database name/username/password: `<details for accessing your MySQL instance>` (`MYSQL_USER`, `MYSQL_PASSWORD`, `MYSQL_DATABASE`; see environment variables in the description for [`mysql`](https://registry.hub.docker.com/_/mysql/))
@@ -39,7 +45,9 @@ When first accessing the webserver provided by this image, it will go through a
## PostgreSQL
```console
$ docker run --name some-drupal --link some-postgres:postgres -d drupal
```
- Database type: `PostgreSQL`
- Database name/username/password: `<details for accessing your PostgreSQL instance>` (`POSTGRES_USER`, `POSTGRES_PASSWORD`; see environment variables in the description for [`postgres`](https://registry.hub.docker.com/_/postgres/))

View File

@ -22,19 +22,27 @@ Elasticsearch is a registered trademark of Elasticsearch BV.
You can run the default `elasticsearch` command simply:
```console
$ docker run -d elasticsearch
```
You can also pass in additional flags to `elasticsearch`:
```console
$ docker run -d elasticsearch elasticsearch -Des.node.name="TestNode"
```
This image comes with a default set of configuration files for `elasticsearch`, but if you want to provide your own set of configuration files, you can do so via a volume mounted at `/usr/share/elasticsearch/config`:
```console
$ docker run -d -v "$PWD/config":/usr/share/elasticsearch/config elasticsearch
```
This image is configured with a volume at `/usr/share/elasticsearch/data` to hold the persisted index data. Use that path if you would like to keep the data in a mounted volume:
```console
$ docker run -d -v "$PWD/esdata":/usr/share/elasticsearch/data elasticsearch
```
This image includes `EXPOSE 9200 9300` ([default `http.port`](http://www.elastic.co/guide/en/elasticsearch/reference/1.5/modules-http.html)), so standard container linking will make it automatically available to the linked containers.
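As a sketch of that linking (`my-app` is a hypothetical image that reads the `elasticsearch` hosts entry the link provides):

```shell
# start elasticsearch, then link an application container to it;
# inside my-app, Elasticsearch is reachable as http://elasticsearch:9200
docker run -d --name some-elasticsearch elasticsearch
docker run -d --link some-elasticsearch:elasticsearch my-app
```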

View File

@ -19,10 +19,12 @@ Fedora rawhide is available via `fedora:rawhide` and Fedora 20 via `fedora:20` a
The metalink `http://mirrors.fedoraproject.org` is used to automatically select a mirror site (both for building the image as well as for the yum repos in the container image).
```console
$ docker run fedora cat /etc/yum.repos.d/fedora.repo | grep metalink
metalink=https://mirrors.fedoraproject.org/metalink?repo=fedora-$releasever&arch=$basearch
metalink=https://mirrors.fedoraproject.org/metalink?repo=fedora-debug-$releasever&arch=$basearch
metalink=https://mirrors.fedoraproject.org/metalink?repo=fedora-source-$releasever&arch=$basearch
```
# Supported Docker versions

View File

@ -19,14 +19,18 @@ Robot simulation is an essential tool in every roboticist's toolbox. A well-desi
## Create a `Dockerfile` in your Gazebo project
```dockerfile
FROM gazebo:gzserver5
# place here your application's setup specifics
CMD [ "gzserver", "my-gazebo-app-args" ]
```
You can then build and run the Docker image:
```console
$ docker build -t my-gazebo-app .
$ docker run -it -v="/tmp/.gazebo/:/root/.gazebo/" --name my-running-app my-gazebo-app
```
## Deployment use cases
@ -46,7 +50,9 @@ Gazebo uses the `~/.gazebo/` directory for storing logs, models and scene info.
For example, if you wish to use your own `.gazebo` folder that already resides in your local home directory (say, for the username `ubuntu`), you can simply launch the container with an additional volume argument:
```console
$ docker run -v "/home/ubuntu/.gazebo/:/root/.gazebo/" gazebo
```
One thing to be careful about is that gzserver logs to files named `/root/.gazebo/server-<port>/*.log`, where `<port>` is the port number the server is listening on (11345 by default). If you run multiple containers mounting the same default port and the same host-side directory, they will collide and attempt to write to the same file. If you want to run multiple gzservers on the same Docker host, a bit more clever volume mounting of `~/.gazebo/` subfolders is required.
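A minimal sketch of avoiding that collision is to give each container its own host-side `.gazebo` directory (the directory names here are illustrative):

```shell
# each gzserver keeps its default port but logs to a distinct host directory
docker run -d -v "/tmp/.gazebo-1/:/root/.gazebo/" --name gazebo1 gazebo
docker run -d -v "/tmp/.gazebo-2/:/root/.gazebo/" --name gazebo2 gazebo
```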
@ -64,49 +70,67 @@ In this short example, we'll spin up a new container running gazebo server, conn
> First launch a gazebo server with a mounted volume for logging and name the container gazebo:
```console
$ docker run -d -v="/tmp/.gazebo/:/root/.gazebo/" --name=gazebo gazebo
```
> Now open a new bash session in the container using the same entrypoint to configure the environment. Then download the double_pendulum model and load it into the simulation.
```console
$ docker exec -it gazebo bash
$ curl -o double_pendulum.sdf http://models.gazebosim.org/double_pendulum_with_base/model-1_4.sdf
$ gz model --model-name double_pendulum --spawn-file double_pendulum.sdf
```
> To start recording the running simulation, simply use [`gz log`](http://www.gazebosim.org/tutorials?tut=log_filtering&cat=tools_utilities):
```console
$ gz log --record 1
```
> After a few seconds, go ahead and stop recording by disabling the same flag.
```console
$ gz log --record 0
```
> To introspect our logged recording, we can navigate to the log directory and use `gz log` to open and examine the motion and joint state of the pendulum. This will allow you to step through the poses of the pendulum links.
```console
$ cd ~/.gazebo/log/*/gzserver/
$ gz log --step --hz 10 --filter *.pose/*.pose --file state.log
```
> If you have an equivalent release of Gazebo installed locally, you can connect to the gzserver inside the container using the gzclient GUI by setting the master URI to the container's public address.
```console
$ export GAZEBO_MASTER_IP=$(docker inspect --format '{{ .NetworkSettings.IPAddress }}' gazebo)
$ export GAZEBO_MASTER_URI=$GAZEBO_MASTER_IP:11345
$ gzclient --verbose
```
> In the rendered OpenGL view with gzclient you should see the double pendulum created earlier still oscillating. From here you can control or monitor the state of the simulation using the graphical interface, add more pendulums, reset the world, make more logs, etc. To quit the simulation, close the gzclient window and stop the container.
```console
$ docker stop gazebo
$ docker rm gazebo
```
> Even though our old gazebo container has been removed, we can still see that our recorded log has been preserved in the host volume directory.
```console
$ cd /tmp/.gazebo/log/
$ ls
```
> Again, if you have an equivalent release of Gazebo installed on your host system, you can play back the simulation with gazebo by using the recorded log file.
```console
$ export GAZEBO_MASTER_IP=127.0.0.1
$ export GAZEBO_MASTER_URI=$GAZEBO_MASTER_IP:11345
$ cd /tmp/.gazebo/log/*/gzserver/
$ gazebo --verbose --play state.log
```
# More Resources

View File

@ -21,26 +21,34 @@ The GNU Compiler Collection (GCC) is a compiler system produced by the GNU Proje
The most straightforward way to use this image is to use a gcc container as both the build and runtime environment. In your `Dockerfile`, writing something along the lines of the following will compile and run your project:
```dockerfile
FROM gcc:4.9
COPY . /usr/src/myapp
WORKDIR /usr/src/myapp
RUN gcc -o myapp main.c
CMD ["./myapp"]
```
Then, build and run the Docker image:
```console
$ docker build -t my-gcc-app .
$ docker run -it --rm --name my-running-app my-gcc-app
```
## Compile your app inside the Docker container
There may be occasions where it is not appropriate to run your app inside a container. To compile your app inside the Docker instance without running it, you can write something like:
```console
$ docker run --rm -v "$PWD":/usr/src/myapp -w /usr/src/myapp gcc:4.9 gcc -o myapp myapp.c
```
This will add your current directory as a volume to the container, set the working directory to the volume, and run the command `gcc -o myapp myapp.c`, which tells gcc to compile the code in `myapp.c` and output the executable as `myapp`. Alternatively, if you have a `Makefile`, you can instead run the `make` command inside your container:
```console
$ docker run --rm -v "$PWD":/usr/src/myapp -w /usr/src/myapp gcc:4.9 make
```
# License

View File

@ -14,23 +14,31 @@ Ghost is a free and open source blogging platform written in JavaScript and dist
# How to use this image
```console
$ docker run --name some-ghost -d ghost
```
This will start a Ghost instance listening on the default Ghost port of 2368.
If you'd like to be able to access the instance from the host without the container's IP, standard port mappings can be used:
```console
$ docker run --name some-ghost -p 8080:2368 -d ghost
```
Then, access it via `http://localhost:8080` or `http://host-ip:8080` in a browser.
You can also point the image to your existing content on your host:
```console
$ docker run --name some-ghost -v /path/to/ghost/blog:/var/lib/ghost ghost
```
Alternatively, you can use a [data container](http://docs.docker.com/userguide/dockervolumes/) that has a volume pointing to `/var/lib/ghost`, and then reference it:
```console
$ docker run --name some-ghost --volumes-from some-ghost-data ghost
```
# Supported Docker versions

View File

@ -27,7 +27,9 @@ Go (a.k.a., Golang) is a programming language first developed at Google. It is a
The most straightforward way to use this image is to use a Go container as both the build and runtime environment. In your `Dockerfile`, writing something along the lines of the following will compile and run your project:
```dockerfile
FROM golang:1.3-onbuild
```
This image includes multiple `ONBUILD` triggers which should cover most applications. The build will `COPY . /usr/src/app`, `RUN go get -d -v`, and `RUN go install -v`.
@ -36,33 +38,43 @@ This image also includes the `CMD ["app"]` instruction which is the default comm
You can then build and run the Docker image:
```console
$ docker build -t my-golang-app .
$ docker run -it --rm --name my-running-app my-golang-app
```
## Compile your app inside the Docker container
There may be occasions where it is not appropriate to run your app inside a container. To compile your app inside the Docker instance without running it, you can write something like:
```console
$ docker run --rm -v "$PWD":/usr/src/myapp -w /usr/src/myapp golang:1.3 go build -v
```
This will add your current directory as a volume to the container, set the working directory to the volume, and run the command `go build -v`, which tells Go to compile the project in the working directory and output the executable as `myapp`. Alternatively, if you have a `Makefile`, you can run the `make` command inside your container.
```console
$ docker run --rm -v "$PWD":/usr/src/myapp -w /usr/src/myapp golang:1.3 make
```
## Cross-compile your app inside the Docker container
If you need to compile your application for a platform other than `linux/amd64` (such as `windows/386`), this can be easily accomplished with the provided `cross` tags:
```console
$ docker run --rm -v "$PWD":/usr/src/myapp -w /usr/src/myapp -e GOOS=windows -e GOARCH=386 golang:1.3-cross go build -v
```
Alternatively, you can build for multiple platforms at once:
```console
$ docker run --rm -it -v "$PWD":/usr/src/myapp -w /usr/src/myapp golang:1.3-cross bash
$ for GOOS in darwin linux; do
> for GOARCH in 386 amd64; do
> go build -v -o myapp-$GOOS-$GOARCH
> done
> done
```
# Image Variants

View File

@ -25,17 +25,23 @@ Note: Many configuration examples propose to put `daemon` into the `global` sect
## Create a `Dockerfile`
```dockerfile
FROM haproxy:1.5
COPY haproxy.cfg /usr/local/etc/haproxy/haproxy.cfg
```
Build and run:
```console
$ docker build -t my-haproxy .
$ docker run -d --name my-running-haproxy my-haproxy
```
## Directly via bind mount
```console
$ docker run -d --name my-running-haproxy -v /path/to/haproxy.cfg:/usr/local/etc/haproxy/haproxy.cfg:ro haproxy:1.5
```
# License

View File

@ -30,40 +30,46 @@ The most recent GHC release in the 7.8 series is also available, though no longe
Start an interactive interpreter session with `ghci`:
```console
$ docker run -it --rm haskell:7.10
GHCi, version 7.10.1: http://www.haskell.org/ghc/ :? for help
Prelude>
```
Dockerize a [Hackage](http://hackage.haskell.org) app with a Dockerfile inheriting from the base image:
```dockerfile
FROM haskell:7.8
RUN cabal update && cabal install MazesOfMonad
VOLUME /root/.MazesOfMonad
ENTRYPOINT ["/root/.cabal/bin/mazesofmonad"]
```
Iteratively develop then ship a Haskell app with a Dockerfile utilizing the build cache:
```dockerfile
FROM haskell:7.8
RUN cabal update
# Add .cabal file
ADD ./server/snap-example.cabal /opt/server/snap-example.cabal
# Docker will cache this command as a layer, freeing us up to
# modify source code without re-installing dependencies
RUN cd /opt/server && cabal install --only-dependencies -j4
# Add and Install Application Code
ADD ./server /opt/server
RUN cd /opt/server && cabal install
# Add installed cabal executables to PATH
ENV PATH /root/.cabal/bin:$PATH
# Default Command for Container
WORKDIR /opt/server
CMD ["snap-example"]
```
### Examples

View File

@ -6,32 +6,34 @@ For more information about this image and its history, please see the [relevant
# Example output
```console
$ docker run hello-world
Hello from Docker.
This message shows that your installation appears to be working correctly.
To generate this message, Docker took the following steps:
1. The Docker client contacted the Docker daemon.
2. The Docker daemon pulled the "hello-world" image from the Docker Hub.
3. The Docker daemon created a new container from that image which runs the
executable that produces the output you are currently reading.
4. The Docker daemon streamed that output to the Docker client, which sent it
to your terminal.
To try something more ambitious, you can run an Ubuntu container with:
$ docker run -it ubuntu bash
Share images, automate workflows, and more with a free Docker Hub account:
https://hub.docker.com
For more examples and ideas, visit:
https://docs.docker.com/userguide/
$ docker images hello-world
REPOSITORY TAG IMAGE ID VIRTUAL SIZE
hello-world latest af340544ed62 960 B
```
![logo](https://raw.githubusercontent.com/docker-library/docs/master/hello-world/logo.png)

View File

@ -19,26 +19,34 @@ This image only contains Apache httpd with the defaults from upstream. There is
### Create a `Dockerfile` in your project
```dockerfile
FROM httpd:2.4
COPY ./public-html/ /usr/local/apache2/htdocs/
```
Then, run the commands to build and run the Docker image:
```console
$ docker build -t my-apache2 .
$ docker run -it --rm --name my-running-app my-apache2
```
### Without a `Dockerfile`
If you don't want to include a `Dockerfile` in your project, it is sufficient to do the following:
```console
$ docker run -it --rm --name my-apache-app -v "$PWD":/usr/local/apache2/htdocs/ httpd:2.4
```
### Configuration
To customize the configuration of the httpd server, just `COPY` your custom configuration in as `/usr/local/apache2/conf/httpd.conf`.
```dockerfile
FROM httpd:2.4
COPY ./my-httpd.conf /usr/local/apache2/conf/httpd.conf
```
#### SSL/HTTPS

View File

@ -16,21 +16,27 @@ Hy (a.k.a., Hylang) is a dialect of the Lisp programming language designed to in
## Create a `Dockerfile` in your Hy project
```dockerfile
FROM hylang:0.10
COPY . /usr/src/myapp
WORKDIR /usr/src/myapp
CMD [ "hy", "./your-daemon-or-script.hy" ]
```
You can then build and run the Docker image:
```console
$ docker build -t my-hylang-app .
$ docker run -it --rm --name my-running-app my-hylang-app
```
## Run a single Hy script
For many simple, single-file projects, you may find it inconvenient to write a complete `Dockerfile`. In such cases, you can run a Hy script by using the Hy Docker image directly:
```console
$ docker run -it --rm --name my-running-script -v "$PWD":/usr/src/myapp -w /usr/src/myapp hylang:0.10 hy your-daemon-or-script.hy
```
# License

View File

@ -28,20 +28,26 @@ This project aims to continue development of io.js under an "open governance mod
If you want to distribute your application on the docker registry, create a `Dockerfile` in the root of application directory:
```dockerfile
FROM iojs:onbuild
# Expose the ports that your app uses. For example:
EXPOSE 8080
```
Then simply run:
```console
$ docker build -t iojs-app .
...
$ docker run --rm -it iojs-app
```
To run a single script, you can mount it in a volume under `/usr/src/app`. From the root of your application directory (assuming your script is named `index.js`):
```console
$ docker run -v "$PWD":/usr/src/app -w /usr/src/app -it --rm iojs iojs index.js
```
# Image Variants

View File

@ -22,25 +22,31 @@ Be sure to also checkout the [awesome scripts](https://github.com/irssi/scripts.
On a Linux system, launch a container named `my-running-irssi` like this:
```console
$ docker run -it --name my-running-irssi -e TERM -u $(id -u):$(id -g) \
-v $HOME/.irssi:/home/user/.irssi:ro \
-v /etc/localtime:/etc/localtime:ro \
irssi
```
On a Mac OS X system, run the same image using:
```console
$ docker run -it --name my-running-irssi -e TERM -u $(id -u):$(id -g) \
-v $HOME/.irssi:/home/user/.irssi:ro \
irssi
```
You omit `/etc/localtime` on Mac OS X because `boot2docker` doesn't use this file.
Of course, you can name your container anything you like. In Docker 1.5 you can also use the `--read-only` mount flag. For example, on Linux:
```console
$ docker run -it --name my-running-irssi -e TERM -u $(id -u):$(id -g) \
--read-only -v $HOME/.irssi:/home/user/.irssi \
-v /etc/localtime:/etc/localtime \
irssi
```
# License

View File

@ -25,22 +25,28 @@ Java is a registered trademark of Oracle and/or its affiliates.
The most straightforward way to use this image is to use a Java container as both the build and runtime environment. In your `Dockerfile`, writing something along the lines of the following will compile and run your project:
```dockerfile
FROM java:7
COPY . /usr/src/myapp
WORKDIR /usr/src/myapp
RUN javac Main.java
CMD ["java", "Main"]
```
You can then build and run the Docker image:
```console
$ docker build -t my-java-app .
$ docker run -it --rm --name my-running-app my-java-app
```
## Compile your app inside the Docker container
There may be occasions where it is not appropriate to run your app inside a container. To compile your app inside the Docker instance without running it, you can write something like:
```console
$ docker run --rm -v "$PWD":/usr/src/myapp -w /usr/src/myapp java:7 javac Main.java
```
This will add your current directory as a volume to the container, set the working directory to the volume, and run the command `javac Main.java` which will tell Java to compile the code in `Main.java` and output the Java class file to `Main.class`.
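Having compiled `Main.class` into the mounted volume, you can run it the same way; this is a sketch that assumes `Main` has a standard `main` method:

```shell
# run the previously compiled class from the same mounted volume
docker run --rm -v "$PWD":/usr/src/myapp -w /usr/src/myapp java:7 java Main
```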

View File

@ -14,11 +14,15 @@ This is a fully functional Jenkins server, based on the Long Term Support releas
# How to use this image
```console
$ docker run -p 8080:8080 jenkins
```
This will store the workspace in `/var/jenkins_home`. All Jenkins data lives there, including plugins and configuration. You will probably want to make that a persistent volume:
```console
$ docker run --name myjenkins -p 8080:8080 -v /var/jenkins_home jenkins
```
The volume for the "myjenkins" named container will then be persistent.
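One common way to back up such a volume (a sketch using a throwaway `busybox` container) is to archive it onto the host:

```shell
# archive the myjenkins volume into the current host directory
docker run --rm --volumes-from myjenkins -v "$PWD":/backup busybox tar cf /backup/jenkins-home.tar /var/jenkins_home
```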
@ -26,7 +30,9 @@ You can also bind mount in a volume from the host:
First, ensure that `/your/home` is accessible by the `jenkins` user inside the container (the `jenkins` user normally has uid 102; alternatively, use `-u root`), then:
```console
$ docker run -p 8080:8080 -v /your/home:/var/jenkins_home jenkins
```
## Backing up data

View File

@ -18,11 +18,15 @@ Jetty is a pure Java-based HTTP (Web) server and Java Servlet container. While W
Run the default Jetty server (`CMD ["jetty.sh", "run"]`):
```console
$ docker run -d jetty:9
```
You can test it by visiting `http://container-ip:8080` in a browser or, if you need access outside the host, on port 8888:
```console
$ docker run -d -p 8888:8080 jetty:9
```
You can then go to `http://localhost:8888` or `http://host-ip:8888` in a browser.
@ -46,7 +50,9 @@ For older EOL'd images based on Jetty 7 or Jetty 8, please follow the [legacy in
To run `jetty` as a read-only container, have Docker create the `/tmp/jetty` and `/run/jetty` directories as volumes:
```console
$ docker run -d --read-only -v /tmp/jetty -v /run/jetty jetty:9
```
Since the container is read-only, you'll need to either mount in your webapps directory with `-v /path/to/my/webapps:/var/lib/jetty/webapps` or by populating `/var/lib/jetty/webapps` in a derived image.
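For example, mounting a host webapps directory read-only alongside the writable volumes might look like this sketch (`/path/to/my/webapps` is a placeholder):

```shell
# writable tmp/run volumes plus a read-only webapps bind mount
docker run -d --read-only -v /tmp/jetty -v /run/jetty -v /path/to/my/webapps:/var/lib/jetty/webapps:ro jetty:9
```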
@ -56,7 +62,9 @@ By default, this image starts as user `root` and uses Jetty's `setuid` module to
If you would like the image to start immediately as user `jetty` instead of starting as `root`, you can start the container with `-u jetty`:
```console
$ docker run -d -u jetty jetty:9
```
# License

View File

@ -15,7 +15,9 @@ Joomla is a free and open-source content management system (CMS) for publishing
# How to use this image
```console
$ docker run --name some-joomla --link some-mysql:mysql -d joomla
```
The following environment variables are also honored for configuring your Joomla instance:
@ -28,30 +30,36 @@ If the `JOOMLA_DB_NAME` specified does not already exist on the given MySQL serv
If you'd like to be able to access the instance from the host without the container's IP, standard port mappings can be used:
```console
$ docker run --name some-joomla --link some-mysql:mysql -p 8080:80 -d joomla
```
Then, access it via `http://localhost:8080` or `http://host-ip:8080` in a browser.
If you'd like to use an external database instead of a linked `mysql` container, specify the hostname and port with `JOOMLA_DB_HOST` along with the password in `JOOMLA_DB_PASSWORD` and the username in `JOOMLA_DB_USER` (if it is something other than `root`):
```console
$ docker run --name some-joomla -e JOOMLA_DB_HOST=10.1.2.3:3306 \
-e JOOMLA_DB_USER=... -e JOOMLA_DB_PASSWORD=... -d joomla
```
## ... via [`docker-compose`](https://github.com/docker/compose)
Example `docker-compose.yml` for `joomla`:
```yaml
joomla:
image: joomla
links:
- joomladb:mysql
ports:
- 8080:80
joomladb:
image: mysql:5.6
environment:
MYSQL_ROOT_PASSWORD: example
```
Run `docker-compose up`, wait for it to initialize completely, and visit `http://localhost:8080` or `http://host-ip:8080`.

View File

@ -25,8 +25,10 @@ JRuby leverages the robustness and speed of the JVM while providing the same Rub
## Create a `Dockerfile` in your Ruby app project
```dockerfile
FROM jruby:1.7-onbuild
CMD ["./your-daemon-or-script.rb"]
```
Put this file in the root of your app, next to the `Gemfile`.
@ -34,20 +36,26 @@ This image includes multiple `ONBUILD` triggers which should be all you need to
You can then build and run the Ruby image:
```console
$ docker build -t my-ruby-app .
$ docker run -it --name my-running-script my-ruby-app
```
### Generate a `Gemfile.lock`
The `onbuild` tag expects a `Gemfile.lock` in your app directory. This `docker run` will help you generate one. Run it in the root of your app, next to the `Gemfile`:
```console
$ docker run --rm -v "$PWD":/usr/src/app -w /usr/src/app jruby:1.7 bundle install --system
```
## Run a single Ruby script
For many simple, single-file projects, you may find it inconvenient to write a complete `Dockerfile`. In such cases, you can run a Ruby script by using the Ruby Docker image directly:
```console
$ docker run -it --rm --name my-running-script -v "$PWD":/usr/src/myapp -w /usr/src/myapp jruby:1.7 jruby your-daemon-or-script.rb
```
# License

View File

@ -18,11 +18,15 @@ Julia is a high-level, high-performance dynamic programming language for technic
Starting the Julia REPL is as easy as the following:
```console
$ docker run -it --rm julia
```
## Run Julia script from your local directory inside container
```console
$ docker run -it --rm -v "$PWD":/usr/myapp -w /usr/myapp julia julia script.jl arg1 arg2
```
# License

View File

@ -16,7 +16,9 @@ The Kaazing Gateway is a network gateway created to provide a single access poin
By default the gateway runs a WebSocket echo service similar to [websocket.org](https://www.websocket.org/echo.html).
```console
$ docker run --name some-kaazing-gateway -h somehostname -d -p 8000:8000 kaazing-gateway
```
You should then be able to connect to ws://somehostname:8000 from the [WebSocket echo test](https://www.websocket.org/echo.html).
@ -26,22 +28,30 @@ Note: this assumes that `somehostname` is resolvable from your browser, you may
To launch a container with a specific configuration you can do the following:
```console
$ docker run --name some-kaazing-gateway -v /some/gateway-config.xml:/kaazing-gateway/conf/gateway-config.xml:ro -d kaazing-gateway
```
For information on the syntax of the Kaazing Gateway configuration files, see [the official documentation](http://developer.kaazing.com/documentation/5.0/index.html) (specifically the [Configuration Guide](http://developer.kaazing.com/documentation/5.0/admin-reference/r_conf_elementindex.html)).
If you wish to adapt the default Gateway configuration file, you can use a command such as the following to copy the file from a running Kaazing Gateway container:
docker cp some-kaazing:/conf/gateway-config-minimal.xml /some/gateway-config.xml
```console
$ docker cp some-kaazing:/conf/gateway-config-minimal.xml /some/gateway-config.xml
```
As above, this can also be accomplished more cleanly using a simple `Dockerfile`:
FROM kaazing-gateway
COPY gateway-config.xml /conf/gateway-config.xml
```dockerfile
FROM kaazing-gateway
COPY gateway-config.xml /conf/gateway-config.xml
```
Then, build with `docker build -t some-custom-kaazing-gateway .` and run:
docker run --name some-kaazing-gateway -d some-custom-kaazing-gateway
```console
$ docker run --name some-kaazing-gateway -d some-custom-kaazing-gateway
```
# License

View File

@@ -19,19 +19,27 @@ Kibana is a registered trademark of Elasticsearch BV.
You can run the default `kibana` command simply:
docker run --link some-elasticsearch:elasticsearch -d kibana
```console
$ docker run --link some-elasticsearch:elasticsearch -d kibana
```
You can also pass in additional flags to `kibana`:
docker run --link some-elasticsearch:elasticsearch -d kibana --plugins /somewhere/else
```console
$ docker run --link some-elasticsearch:elasticsearch -d kibana --plugins /somewhere/else
```
This image includes `EXPOSE 5601` ([default `port`](https://www.elastic.co/guide/en/kibana/current/_setting_kibana_server_properties.html)). If you'd like to be able to access the instance from the host without the container's IP, standard port mappings can be used:
docker run --name some-kibana --link some-elasticsearch:elasticsearch -p 5601:5601 -d kibana
```console
$ docker run --name some-kibana --link some-elasticsearch:elasticsearch -p 5601:5601 -d kibana
```
You can also provide the address of Elasticsearch via the `ELASTICSEARCH_URL` environment variable:
docker run --name some-kibana -e ELASTICSEARCH_URL=http://some-elasticsearch:9200 -p 5601:5601 -d kibana
```console
$ docker run --name some-kibana -e ELASTICSEARCH_URL=http://some-elasticsearch:9200 -p 5601:5601 -d kibana
```
Then, access it via `http://localhost:5601` or `http://host-ip:5601` in a browser.

View File

@@ -18,13 +18,17 @@ Logstash is a tool that can be used to collect, process and forward events and l
If you need to run logstash with configuration provided on the commandline, you can use the logstash image as follows:
docker run -it --rm logstash logstash -e 'input { stdin { } } output { stdout { } }'
```console
$ docker run -it --rm logstash logstash -e 'input { stdin { } } output { stdout { } }'
```
## Start Logstash with configuration file
If you need to run logstash with a configuration file, `logstash.conf`, that's located in your current directory, you can use the logstash image as follows:
docker run -it --rm -v "$PWD":/config-dir logstash logstash -f /config-dir/logstash.conf
```console
$ docker run -it --rm -v "$PWD":/config-dir logstash logstash -f /config-dir/logstash.conf
```
# License

View File

@@ -25,9 +25,11 @@ To date, Mageia:
## Create a Dockerfile for your container
FROM mageia:4
MAINTAINER "Foo Bar" <foo@bar.com>
CMD [ "bash" ]
```dockerfile
FROM mageia:4
MAINTAINER "Foo Bar" <foo@bar.com>
CMD [ "bash" ]
```
## Installed packages

View File

@@ -19,7 +19,9 @@ The intent is also to maintain high compatibility with MySQL, ensuring a "drop-i
## start a `mariadb` server instance
docker run --name some-mariadb -e MYSQL_ROOT_PASSWORD=mysecretpassword -d mariadb
```console
$ docker run --name some-mariadb -e MYSQL_ROOT_PASSWORD=mysecretpassword -d mariadb
```
This image includes `EXPOSE 3306` (the standard MySQL port), so container linking will make it automatically available to the linked containers (as the following examples illustrate).
@@ -27,11 +29,15 @@ This image includes `EXPOSE 3306` (the standard MySQL port), so container linkin
Since MariaDB is intended as a drop-in replacement for MySQL, it can be used with many applications.
docker run --name some-app --link some-mariadb:mysql -d application-that-uses-mysql
```console
$ docker run --name some-app --link some-mariadb:mysql -d application-that-uses-mysql
```
## ... or via `mysql`
docker run -it --link some-mariadb:mysql --rm mariadb sh -c 'exec mysql -h"$MYSQL_PORT_3306_TCP_ADDR" -P"$MYSQL_PORT_3306_TCP_PORT" -uroot -p"$MYSQL_ENV_MYSQL_ROOT_PASSWORD"'
```console
$ docker run -it --link some-mariadb:mysql --rm mariadb sh -c 'exec mysql -h"$MYSQL_PORT_3306_TCP_ADDR" -P"$MYSQL_PORT_3306_TCP_PORT" -uroot -p"$MYSQL_ENV_MYSQL_ROOT_PASSWORD"'
```
## Environment Variables

View File

@@ -17,8 +17,10 @@ For more information about this image and its history, please see the [relevant
## Create a Dockerfile in your Maven project
FROM maven:3.2-jdk-7-onbuild
CMD ["do-something-with-built-packages"]
```dockerfile
FROM maven:3.2-jdk-7-onbuild
CMD ["do-something-with-built-packages"]
```
Put this file in the root of your project, next to the `pom.xml`.
@@ -26,14 +28,18 @@ This image includes multiple ONBUILD triggers which should be all you need to bo
You can then build and run the image:
docker build -t my-maven .
docker run -it --name my-maven-script my-maven
```console
$ docker build -t my-maven .
$ docker run -it --name my-maven-script my-maven
```
## Run a single Maven command
For many simple projects, you may find it inconvenient to write a complete `Dockerfile`. In such cases, you can run a Maven project by using the Maven Docker image directly, passing a Maven command to `docker run`:
docker run -it --rm --name my-maven-project -v "$PWD":/usr/src/mymaven -w /usr/src/mymaven maven:3.2-jdk-7 mvn clean install
```console
$ docker run -it --rm --name my-maven-project -v "$PWD":/usr/src/mymaven -w /usr/src/mymaven maven:3.2-jdk-7 mvn clean install
```
# Image Variants

View File

@@ -14,17 +14,23 @@ Memcached's APIs provide a very large hash table distributed across multiple mac
# How to use this image
docker run --name my-memcache -d memcached
```console
$ docker run --name my-memcache -d memcached
```
Start your memcached container with the above command and then you can connect your app to it with standard linking:
docker run --link my-memcache:memcache -d my-app-image
```console
$ docker run --link my-memcache:memcache -d my-app-image
```
The memcached server information would then be available through the ENV variables generated by the link as well as through DNS as `memcache` from `/etc/hosts`.
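As an illustration of what the link provides (the exact variable names depend on the link alias and the exposed port; the alias `memcache` and memcached's default port 11211 are assumed here), you can list the injected variables from a throwaway container:

```console
$ docker run --link my-memcache:memcache --rm busybox env
```

Variables such as `MEMCACHE_PORT_11211_TCP_ADDR` and `MEMCACHE_PORT_11211_TCP_PORT` should appear in the output.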
How to set the memory usage for memcached
docker run --name my-memcache -d memcached memcached -m 64
```console
$ docker run --name my-memcache -d memcached memcached -m 64
```
This would set the memcached server to use 64 megabytes for storage.

View File

@@ -22,17 +22,23 @@ First developed by the software company 10gen (now MongoDB Inc.) in October 2007
## start a mongo instance
docker run --name some-mongo -d mongo
```console
$ docker run --name some-mongo -d mongo
```
This image includes `EXPOSE 27017` (the mongo port), so standard container linking will make it automatically available to the linked containers (as the following examples illustrate).
## connect to it from an application
docker run --name some-app --link some-mongo:mongo -d application-that-uses-mongo
```console
$ docker run --name some-app --link some-mongo:mongo -d application-that-uses-mongo
```
## ... or via `mongo`
docker run -it --link some-mongo:mongo --rm mongo sh -c 'exec mongo "$MONGO_PORT_27017_TCP_ADDR:$MONGO_PORT_27017_TCP_PORT/test"'
```console
$ docker run -it --link some-mongo:mongo --rm mongo sh -c 'exec mongo "$MONGO_PORT_27017_TCP_ADDR:$MONGO_PORT_27017_TCP_PORT/test"'
```
## Configuration
@@ -40,7 +46,9 @@ See the [official docs](http://docs.mongodb.org/manual/) for information on using
Just add the `--storageEngine` argument if you want to use the WiredTiger storage engine in MongoDB 3.0 and above without making a config file. Be sure to check the [docs](http://docs.mongodb.org/manual/release-notes/3.0-upgrade/#change-storage-engine-to-wiredtiger) on how to upgrade from older versions.
docker run --name some-mongo -d mongo --storageEngine=wiredTiger
```console
$ docker run --name some-mongo -d mongo --storageEngine=wiredTiger
```
## Where to Store Data
@@ -56,13 +64,17 @@ The Docker documentation is a good starting point for understanding the differen
1. Create a data directory on a suitable volume on your host system, e.g. `/my/own/datadir`.
2. Start your `mongo` container like this:
docker run --name some-mongo -v /my/own/datadir:/data/db -d mongo:tag
```console
$ docker run --name some-mongo -v /my/own/datadir:/data/db -d mongo:tag
```
The `-v /my/own/datadir:/data/db` part of the command mounts the `/my/own/datadir` directory from the underlying host system as `/data/db` inside the container, where MongoDB by default will write its data files.
Note that users on host systems with SELinux enabled may see issues with this. The current workaround is to assign the relevant SELinux policy type to the new data directory so that the container will be allowed to access it:
chcon -Rt svirt_sandbox_file_t /my/own/datadir
```console
$ chcon -Rt svirt_sandbox_file_t /my/own/datadir
```
# License

View File

@@ -28,8 +28,10 @@ This image will run stand-alone Mono console apps.
This example Dockerfile will run an executable called `TestingConsoleApp.exe`.
FROM mono:3.10-onbuild
CMD [ "mono", "./TestingConsoleApp.exe" ]
```dockerfile
FROM mono:3.10-onbuild
CMD [ "mono", "./TestingConsoleApp.exe" ]
```
Place this file in the root of your app, next to the `.sln` solution file. Modify the executable name to match what you want to run.
@@ -37,8 +39,10 @@ This image includes `ONBUILD` triggers that add your app source code to `/usr/s
With the Dockerfile in place, you can build and run a Docker image with your app:
docker build -t my-app .
docker run my-app
```console
$ docker build -t my-app .
$ docker run my-app
```
You should see any output from your app now.

View File

@@ -20,7 +20,9 @@ For more information and related downloads for MySQL Server and other MySQL prod
Starting a MySQL instance is simple:
docker run --name some-mysql -e MYSQL_ROOT_PASSWORD=my-secret-pw -d mysql:tag
```console
$ docker run --name some-mysql -e MYSQL_ROOT_PASSWORD=my-secret-pw -d mysql:tag
```
... where `some-mysql` is the name you want to assign to your container, `my-secret-pw` is the password to be set for the MySQL root user and `tag` is the tag specifying the MySQL version you want. See the list above for relevant tags.
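For example, with the placeholders filled in (`mysql:5.6` is used here purely as an illustrative tag; pick whichever tag from the list above you actually need):

```console
$ docker run --name some-mysql -e MYSQL_ROOT_PASSWORD=my-secret-pw -d mysql:5.6
```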
@@ -28,13 +30,17 @@ Starting a MySQL instance is simple:
This image exposes the standard MySQL port (3306), so container linking makes the MySQL instance available to other application containers. Start your application container like this in order to link it to the MySQL container:
docker run --name some-app --link some-mysql:mysql -d app-that-uses-mysql
```console
$ docker run --name some-app --link some-mysql:mysql -d app-that-uses-mysql
```
## Connect to MySQL from the MySQL command line client
The following command starts another MySQL container instance and runs the `mysql` command line client against your original MySQL container, allowing you to execute SQL statements against your database instance:
docker run -it --link some-mysql:mysql --rm mysql sh -c 'exec mysql -h"$MYSQL_PORT_3306_TCP_ADDR" -P"$MYSQL_PORT_3306_TCP_PORT" -uroot -p"$MYSQL_ENV_MYSQL_ROOT_PASSWORD"'
```console
$ docker run -it --link some-mysql:mysql --rm mysql sh -c 'exec mysql -h"$MYSQL_PORT_3306_TCP_ADDR" -P"$MYSQL_PORT_3306_TCP_PORT" -uroot -p"$MYSQL_ENV_MYSQL_ROOT_PASSWORD"'
```
... where `some-mysql` is the name of your original MySQL Server container.
@@ -44,11 +50,15 @@ More information about the MySQL command line client can be found in the [MySQL
The `docker exec` command allows you to run commands inside a Docker container. The following command line will give you a bash shell inside your `mysql` container:
docker exec -it some-mysql bash
```console
$ docker exec -it some-mysql bash
```
The MySQL Server log is available through Docker's container log:
docker logs some-mysql
```console
$ docker logs some-mysql
```
## Using a custom MySQL configuration file
@@ -56,13 +66,17 @@ The MySQL startup configuration is specified in the file `/etc/mysql/my.cnf`, an
If `/my/custom/config-file.cnf` is the path and name of your custom configuration file, you can start your `mysql` container like this (note that only the directory path of the custom config file is used in this command):
docker run --name some-mysql -v /my/custom:/etc/mysql/conf.d -e MYSQL_ROOT_PASSWORD=my-secret-pw -d mysql:tag
```console
$ docker run --name some-mysql -v /my/custom:/etc/mysql/conf.d -e MYSQL_ROOT_PASSWORD=my-secret-pw -d mysql:tag
```
This will start a new container `some-mysql` where the MySQL instance uses the combined startup settings from `/etc/mysql/my.cnf` and `/etc/mysql/conf.d/config-file.cnf`, with settings from the latter taking precedence.
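As a sketch, such a `/my/custom/config-file.cnf` might override a couple of server settings (the option names below are standard MySQL settings; the values are only illustrative):

```ini
[mysqld]
max_connections = 250
innodb_buffer_pool_size = 512M
```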
Note that users on host systems with SELinux enabled may see issues with this. The current workaround is to assign the relevant SELinux policy type to your new config file so that the container will be allowed to mount it:
chcon -Rt svirt_sandbox_file_t /my/custom
```console
$ chcon -Rt svirt_sandbox_file_t /my/custom
```
## Environment Variables
@@ -100,13 +114,17 @@ The Docker documentation is a good starting point for understanding the differen
1. Create a data directory on a suitable volume on your host system, e.g. `/my/own/datadir`.
2. Start your `mysql` container like this:
docker run --name some-mysql -v /my/own/datadir:/var/lib/mysql -e MYSQL_ROOT_PASSWORD=my-secret-pw -d mysql:tag
```console
$ docker run --name some-mysql -v /my/own/datadir:/var/lib/mysql -e MYSQL_ROOT_PASSWORD=my-secret-pw -d mysql:tag
```
The `-v /my/own/datadir:/var/lib/mysql` part of the command mounts the `/my/own/datadir` directory from the underlying host system as `/var/lib/mysql` inside the container, where MySQL by default will write its data files.
Note that users on host systems with SELinux enabled may see issues with this. The current workaround is to assign the relevant SELinux policy type to the new data directory so that the container will be allowed to access it:
chcon -Rt svirt_sandbox_file_t /my/own/datadir
```console
$ chcon -Rt svirt_sandbox_file_t /my/own/datadir
```
## No connections until MySQL init completes

View File

@@ -32,10 +32,12 @@ The `neurodebian:latest` tag will always point the Neurodebian-enabled latest st
The NeuroDebian APT file is installed under `/etc/apt/sources.list.d/neurodebian.sources.list` and currently enables only the `main` (DFSG-compliant) area of the archive:
> docker run neurodebian:latest cat /etc/apt/sources.list.d/neurodebian.sources.list
deb http://neuro.debian.net/debian wheezy main
deb http://neuro.debian.net/debian data main
#deb-src http://neuro.debian.net/debian-devel wheezy main
```console
$ docker run neurodebian:latest cat /etc/apt/sources.list.d/neurodebian.sources.list
deb http://neuro.debian.net/debian wheezy main
deb http://neuro.debian.net/debian data main
#deb-src http://neuro.debian.net/debian-devel wheezy main
```
# Supported Docker versions

View File

@@ -16,26 +16,36 @@ Nginx (pronounced "engine-x") is an open source reverse proxy server for HTTP, H
## hosting some simple static content
docker run --name some-nginx -v /some/content:/usr/share/nginx/html:ro -d nginx
```console
$ docker run --name some-nginx -v /some/content:/usr/share/nginx/html:ro -d nginx
```
Alternatively, a simple `Dockerfile` can be used to generate a new image that includes the necessary content (which is a much cleaner solution than the bind mount above):
FROM nginx
COPY static-html-directory /usr/share/nginx/html
```dockerfile
FROM nginx
COPY static-html-directory /usr/share/nginx/html
```
Place this file in the same directory as your directory of content ("static-html-directory"), run `docker build -t some-content-nginx .`, then start your container:
docker run --name some-nginx -d some-content-nginx
```console
$ docker run --name some-nginx -d some-content-nginx
```
## exposing the port
docker run --name some-nginx -d -p 8080:80 some-content-nginx
```console
$ docker run --name some-nginx -d -p 8080:80 some-content-nginx
```
Then you can hit `http://localhost:8080` or `http://host-ip:8080` in your browser.
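To verify the mapping from the host (assuming `curl` is available and the container is running), you could request the headers of the default page:

```console
$ curl -I http://localhost:8080
```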
## complex configuration
docker run --name some-nginx -v /some/nginx.conf:/etc/nginx/nginx.conf:ro -d nginx
```console
$ docker run --name some-nginx -v /some/nginx.conf:/etc/nginx/nginx.conf:ro -d nginx
```
For information on the syntax of the Nginx configuration files, see [the official documentation](http://nginx.org/en/docs/) (specifically the [Beginner's Guide](http://nginx.org/en/docs/beginners_guide.html#conf_structure)).
@@ -43,16 +53,22 @@ Be sure to include `daemon off;` in your custom configuration to ensure that Ngi
If you wish to adapt the default configuration, use something like the following to copy it from a running Nginx container:
docker cp some-nginx:/etc/nginx/nginx.conf /some/nginx.conf
```console
$ docker cp some-nginx:/etc/nginx/nginx.conf /some/nginx.conf
```
As above, this can also be accomplished more cleanly using a simple `Dockerfile`:
FROM nginx
COPY nginx.conf /etc/nginx/nginx.conf
```dockerfile
FROM nginx
COPY nginx.conf /etc/nginx/nginx.conf
```
Then, build with `docker build -t some-custom-nginx .` and run:
docker run --name some-nginx -d some-custom-nginx
```console
$ docker run --name some-nginx -d some-custom-nginx
```
# Supported Docker versions

View File

@@ -31,14 +31,18 @@ Node.js internally uses the Google V8 JavaScript engine to execute code; a large
## Create a `Dockerfile` in your Node.js app project
FROM node:0.10-onbuild
# replace this with your application's default port
EXPOSE 8888
```dockerfile
FROM node:0.10-onbuild
# replace this with your application's default port
EXPOSE 8888
```
You can then build and run the Docker image:
docker build -t my-nodejs-app .
docker run -it --rm --name my-running-app my-nodejs-app
```console
$ docker build -t my-nodejs-app .
$ docker run -it --rm --name my-running-app my-nodejs-app
```
### Notes
@@ -48,7 +52,9 @@ The image assumes that your application has a file named [`package.json`](https:
For many simple, single file projects, you may find it inconvenient to write a complete `Dockerfile`. In such cases, you can run a Node.js script by using the Node.js Docker image directly:
docker run -it --rm --name my-running-script -v "$PWD":/usr/src/myapp -w /usr/src/myapp node:0.10 node your-daemon-or-script.js
```console
$ docker run -it --rm --name my-running-script -v "$PWD":/usr/src/myapp -w /usr/src/myapp node:0.10 node your-daemon-or-script.js
```
# Image Variants

View File

@@ -18,18 +18,24 @@ This image requires a running PostgreSQL server.
## Start a PostgreSQL server
docker run -d -e POSTGRES_USER=odoo -e POSTGRES_PASSWORD=odoo --name db postgres
```console
$ docker run -d -e POSTGRES_USER=odoo -e POSTGRES_PASSWORD=odoo --name db postgres
```
## Start an Odoo instance
docker run -p 127.0.0.1:8069:8069 --name odoo --link db:db -t odoo
```console
$ docker run -p 127.0.0.1:8069:8069 --name odoo --link db:db -t odoo
```
The alias of the container running Postgres must be `db` for Odoo to be able to connect to the Postgres server.
## Stop and restart an Odoo instance
docker stop odoo
docker start -a odoo
```console
$ docker stop odoo
$ docker start -a odoo
```
## Stop and restart a PostgreSQL server
@@ -41,24 +47,32 @@ Restarting a PostgreSQL server does not affect the created databases.
The default configuration file for the server (located at `/etc/odoo/openerp-server.conf`) can be overridden at startup using volumes. Suppose you have a custom configuration at `/path/to/config/openerp-server.conf`, then
docker run -v /path/to/config:/etc/odoo -p 127.0.0.1:8069:8069 --name odoo --link db:db -t odoo
```console
$ docker run -v /path/to/config:/etc/odoo -p 127.0.0.1:8069:8069 --name odoo --link db:db -t odoo
```
Please use [this configuration template](https://github.com/odoo/docker/blob/master/8.0/openerp-server.conf) to write your custom configuration as we already set some arguments for running Odoo inside a Docker container.
You can also directly specify Odoo arguments inline. Those arguments must be given after the keyword `--` in the command-line, as follows
docker run -p 127.0.0.1:8069:8069 --name odoo --link db:db -t odoo -- --dbfilter=odoo_db_.*
```console
$ docker run -p 127.0.0.1:8069:8069 --name odoo --link db:db -t odoo -- --dbfilter=odoo_db_.*
```
## Mount custom addons
You can mount your own Odoo addons within the Odoo container, at `/mnt/extra-addons`
docker run -v /path/to/addons:/mnt/extra-addons -p 127.0.0.1:8069:8069 --name odoo --link db:db -t odoo
```console
$ docker run -v /path/to/addons:/mnt/extra-addons -p 127.0.0.1:8069:8069 --name odoo --link db:db -t odoo
```
## Run multiple Odoo instances
docker run -p 127.0.0.1:8070:8069 --name odoo2 --link db:db -t odoo
docker run -p 127.0.0.1:8071:8069 --name odoo3 --link db:db -t odoo
```console
$ docker run -p 127.0.0.1:8070:8069 --name odoo2 --link db:db -t odoo
$ docker run -p 127.0.0.1:8071:8069 --name odoo3 --link db:db -t odoo
```
Please note that, for the mail and report functionalities to work properly when the host and container ports differ (e.g. 8070 and 8069), you have to set the Odoo system parameter web.base.url (in Settings->Parameters->System Parameters, requires technical features) to the container port (e.g. 127.0.0.1:8069).
@@ -68,7 +82,9 @@ Suppose you created a database from an Odoo instance named old-odoo, and you wan
By default, Odoo 8.0 uses a filestore (located at /var/lib/odoo/filestore/) for attachments. You should restore this filestore in your new Odoo instance by running
docker run --volumes-from old-odoo -p 127.0.0.1:8070:8069 --name new-odoo --link db:db -t odoo
```console
$ docker run --volumes-from old-odoo -p 127.0.0.1:8070:8069 --name new-odoo --link db:db -t odoo
```
You can also simply prevent Odoo from using the filestore by setting the system parameter `ir_attachment.location` to `db-storage` in Settings->Parameters->System Parameters (requires technical features).

View File

@@ -21,7 +21,9 @@ ownCloud is a self-hosted file sync and share server. It provides access to your
Starting the ownCloud 8.1 instance listening on port 80 is as easy as the following:
docker run -d -p 80:80 owncloud:8.1
```console
$ docker run -d -p 80:80 owncloud:8.1
```
Then go to http://localhost/ and go through the wizard. By default this container uses SQLite for data storage, but the wizard should allow for connecting to an existing database. Additionally, tags for 6.0, 7.0, or 8.0 are available.

View File

@@ -19,7 +19,9 @@ It aims to retain close compatibility to the official MySQL releases, while focu
## start a `percona` server instance
docker run --name some-percona -e MYSQL_ROOT_PASSWORD=mysecretpassword -d percona
```console
$ docker run --name some-percona -e MYSQL_ROOT_PASSWORD=mysecretpassword -d percona
```
This image includes `EXPOSE 3306` (the standard MySQL port), so container linking will make it automatically available to the linked containers (as the following examples illustrate).
@@ -27,11 +29,15 @@ This image includes `EXPOSE 3306` (the standard MySQL port), so container linkin
Since Percona Server is intended as a drop-in replacement for MySQL, it can be used with many applications.
docker run --name some-app --link some-percona:mysql -d application-that-uses-mysql
```console
$ docker run --name some-app --link some-percona:mysql -d application-that-uses-mysql
```
## ... or via `mysql`
docker run -it --link some-percona:mysql --rm percona sh -c 'exec mysql -h"$MYSQL_PORT_3306_TCP_ADDR" -P"$MYSQL_PORT_3306_TCP_PORT" -uroot -p"$MYSQL_ENV_MYSQL_ROOT_PASSWORD"'
```console
$ docker run -it --link some-percona:mysql --rm percona sh -c 'exec mysql -h"$MYSQL_PORT_3306_TCP_ADDR" -P"$MYSQL_PORT_3306_TCP_PORT" -uroot -p"$MYSQL_ENV_MYSQL_ROOT_PASSWORD"'
```
## Environment Variables

View File

@@ -19,21 +19,27 @@ Perl is a high-level, general-purpose, interpreted, dynamic programming language
## Create a `Dockerfile` in your Perl app project
FROM perl:5.20
COPY . /usr/src/myapp
WORKDIR /usr/src/myapp
CMD [ "perl", "./your-daemon-or-script.pl" ]
```dockerfile
FROM perl:5.20
COPY . /usr/src/myapp
WORKDIR /usr/src/myapp
CMD [ "perl", "./your-daemon-or-script.pl" ]
```
Then, build and run the Docker image:
docker build -t my-perl-app .
docker run -it --rm --name my-running-app my-perl-app
```console
$ docker build -t my-perl-app .
$ docker run -it --rm --name my-running-app my-perl-app
```
## Run a single Perl script
For many simple, single file projects, you may find it inconvenient to write a complete `Dockerfile`. In such cases, you can run a Perl script by using the Perl Docker image directly:
docker run -it --rm --name my-running-script -v "$PWD":/usr/src/myapp -w /usr/src/myapp perl:5.20 perl your-daemon-or-script.pl
```console
$ docker run -it --rm --name my-running-script -v "$PWD":/usr/src/myapp -w /usr/src/myapp perl:5.20 perl your-daemon-or-script.pl
```
# License

View File

@@ -27,21 +27,29 @@ Zend Server is shared on [Docker-Hub](https://registry.hub.docker.com/_/php-zend
- To start a Zend Server cluster, execute the following command for each cluster node:
$ docker run -e MYSQL_HOSTNAME=<db-ip> -e MYSQL_PORT=3306 -e MYSQL_USERNAME=<username> -e MYSQL_PASSWORD=<password> -e MYSQL_DBNAME=zend php-zendserver
```console
$ docker run -e MYSQL_HOSTNAME=<db-ip> -e MYSQL_PORT=3306 -e MYSQL_USERNAME=<username> -e MYSQL_PASSWORD=<password> -e MYSQL_DBNAME=zend php-zendserver
```
#### Launching the Container from Dockerfile
- From a local folder containing this repo's clone, execute the following command to generate the image. The **image-id** will be outputted:
$ docker build .
```console
$ docker build .
```
- To start a single Zend Server instance, execute:
$ docker run <image-id>
```console
$ docker run <image-id>
```
- To start a Zend Server cluster, execute the following command on each cluster node:
$ docker run -e MYSQL_HOSTNAME=<db-ip> -e MYSQL_PORT=3306 -e MYSQL_USERNAME=<username> -e MYSQL_PASSWORD=<password> -e MYSQL_DBNAME=zend <image-id>
```console
$ docker run -e MYSQL_HOSTNAME=<db-ip> -e MYSQL_PORT=3306 -e MYSQL_USERNAME=<username> -e MYSQL_PASSWORD=<password> -e MYSQL_DBNAME=zend <image-id>
```
#### Accessing Zend server

View File

@@ -31,21 +31,27 @@ For PHP projects run through the command line interface (CLI), you can do the fo
### Create a `Dockerfile` in your PHP project
FROM php:5.6-cli
COPY . /usr/src/myapp
WORKDIR /usr/src/myapp
CMD [ "php", "./your-script.php" ]
```dockerfile
FROM php:5.6-cli
COPY . /usr/src/myapp
WORKDIR /usr/src/myapp
CMD [ "php", "./your-script.php" ]
```
Then, run the commands to build and run the Docker image:
docker build -t my-php-app .
docker run -it --rm --name my-running-app my-php-app
```console
$ docker build -t my-php-app .
$ docker run -it --rm --name my-running-app my-php-app
```
### Run a single PHP script
For many simple, single file projects, you may find it inconvenient to write a complete `Dockerfile`. In such cases, you can run a PHP script by using the PHP Docker image directly:
docker run -it --rm --name my-running-script -v "$PWD":/usr/src/myapp -w /usr/src/myapp php:5.6-cli php your-script.php
```console
$ docker run -it --rm --name my-running-script -v "$PWD":/usr/src/myapp -w /usr/src/myapp php:5.6-cli php your-script.php
```
## With Apache
@@ -53,19 +59,25 @@ More commonly, you will probably want to run PHP in conjunction with Apache http
### Create a `Dockerfile` in your PHP project
FROM php:5.6-apache
COPY src/ /var/www/html/
```dockerfile
FROM php:5.6-apache
COPY src/ /var/www/html/
```
Where `src/` is the directory containing all your php code. Then, run the commands to build and run the Docker image:
docker build -t my-php-app .
docker run -it --rm --name my-running-app my-php-app
```console
$ docker build -t my-php-app .
$ docker run -it --rm --name my-running-app my-php-app
```
We recommend that you add a custom `php.ini` configuration. `COPY` it into `/usr/local/etc/php` by adding one more line to the Dockerfile above and running the same commands to build and run:
FROM php:5.6-apache
COPY config/php.ini /usr/local/etc/php
COPY src/ /var/www/html/
```dockerfile
FROM php:5.6-apache
COPY config/php.ini /usr/local/etc/php
COPY src/ /var/www/html/
```
Where `src/` is the directory containing all your php code and `config/` contains your `php.ini` file.
@@ -75,17 +87,19 @@ We provide two convenient scripts named `docker-php-ext-configure` and `docker-p
For example, if you want to have a PHP-FPM image with `iconv`, `mcrypt` and `gd` extensions, you can inherit the base image that you like, and write your own `Dockerfile` like this:
FROM php:5.6-fpm
# Install modules
RUN apt-get update && apt-get install -y \
libfreetype6-dev \
libjpeg62-turbo-dev \
libmcrypt-dev \
libpng12-dev \
&& docker-php-ext-install iconv mcrypt \
&& docker-php-ext-configure gd --with-freetype-dir=/usr/include/ --with-jpeg-dir=/usr/include/ \
&& docker-php-ext-install gd
CMD ["php-fpm"]
```dockerfile
FROM php:5.6-fpm
# Install modules
RUN apt-get update && apt-get install -y \
libfreetype6-dev \
libjpeg62-turbo-dev \
libmcrypt-dev \
libpng12-dev \
&& docker-php-ext-install iconv mcrypt \
&& docker-php-ext-configure gd --with-freetype-dir=/usr/include/ --with-jpeg-dir=/usr/include/ \
&& docker-php-ext-install gd
CMD ["php-fpm"]
```
Remember, you must install dependencies for your extensions manually. If an extension needs custom `configure` arguments, you can use the `docker-php-ext-configure` script as shown in the example above.
@@ -93,7 +107,9 @@ Remember, you must install dependencies for your extensions manually. If an exte
If you don't want to include a `Dockerfile` in your project, it is sufficient to do the following:
docker run -it --rm --name my-apache-php-app -v "$PWD":/var/www/html php:5.6-apache
```console
$ docker run -it --rm --name my-apache-php-app -v "$PWD":/var/www/html php:5.6-apache
```
# License

View File

@@ -23,7 +23,9 @@ PostgreSQL implements the majority of the SQL:2011 standard, is ACID-compliant a
## start a postgres instance
docker run --name some-postgres -e POSTGRES_PASSWORD=mysecretpassword -d postgres
```console
$ docker run --name some-postgres -e POSTGRES_PASSWORD=mysecretpassword -d postgres
```
This image includes `EXPOSE 5432` (the postgres port), so standard container linking will make it automatically available to the linked containers. The default `postgres` user and database are created in the entrypoint with `initdb`.
@@ -32,11 +34,15 @@ This image includes `EXPOSE 5432` (the postgres port), so standard container lin
## connect to it from an application
docker run --name some-app --link some-postgres:postgres -d application-that-uses-postgres
```console
$ docker run --name some-app --link some-postgres:postgres -d application-that-uses-postgres
```
## ... or via `psql`
docker run -it --link some-postgres:postgres --rm postgres sh -c 'exec psql -h "$POSTGRES_PORT_5432_TCP_ADDR" -p "$POSTGRES_PORT_5432_TCP_PORT" -U postgres'
```console
$ docker run -it --link some-postgres:postgres --rm postgres sh -c 'exec psql -h "$POSTGRES_PORT_5432_TCP_ADDR" -p "$POSTGRES_PORT_5432_TCP_PORT" -U postgres'
```
## Environment Variables
@@ -60,9 +66,11 @@ If you would like to do additional initialization in an image derived from this
You can also extend the image with a simple `Dockerfile` to set the locale. The following example will set the default locale to `de_DE.utf8`:
FROM postgres:9.4
RUN localedef -i de_DE -c -f UTF-8 -A /usr/share/locale/locale.alias de_DE.UTF-8
ENV LANG de_DE.utf8
```dockerfile
FROM postgres:9.4
RUN localedef -i de_DE -c -f UTF-8 -A /usr/share/locale/locale.alias de_DE.UTF-8
ENV LANG de_DE.utf8
```
Since database initialization only happens on container startup, this allows us to set the language before it is created.
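To confirm the locale took effect (a sketch, assuming a container built from the `Dockerfile` above is running under the name `some-postgres`):

```console
$ docker exec -it some-postgres psql -U postgres -c 'SHOW lc_collate;'
```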

View File

@@ -23,30 +23,40 @@ PyPy started out as a Python interpreter written in the Python language itself.
## Create a `Dockerfile` in your Python app project
FROM pypy:3-onbuild
CMD [ "pypy3", "./your-daemon-or-script.py" ]
```dockerfile
FROM pypy:3-onbuild
CMD [ "pypy3", "./your-daemon-or-script.py" ]
```
or (if you need to use PyPy 2):
```dockerfile
FROM pypy:2-onbuild
CMD [ "pypy", "./your-daemon-or-script.py" ]
```
These images include multiple `ONBUILD` triggers, which should be all you need to bootstrap most applications. The build will `COPY` a `requirements.txt` file, `RUN pip install` on said file, and then copy the current directory into `/usr/src/app`.
You can then build and run the Docker image:
```console
$ docker build -t my-python-app .
$ docker run -it --rm --name my-running-app my-python-app
```
## Run a single Python script
For many simple, single file projects, you may find it inconvenient to write a complete `Dockerfile`. In such cases, you can run a Python script by using the Python Docker image directly:
```console
$ docker run -it --rm --name my-running-script -v "$PWD":/usr/src/myapp -w /usr/src/myapp pypy:3 pypy3 your-daemon-or-script.py
```
or (again, if you need to use Python 2):
```console
$ docker run -it --rm --name my-running-script -v "$PWD":/usr/src/myapp -w /usr/src/myapp pypy:2 pypy your-daemon-or-script.py
```
# Image Variants

View File

@ -34,30 +34,40 @@ Python is an interpreted, interactive, object-oriented, open-source programming
## Create a `Dockerfile` in your Python app project
```dockerfile
FROM python:3-onbuild
CMD [ "python", "./your-daemon-or-script.py" ]
```
or (if you need to use Python 2):
```dockerfile
FROM python:2-onbuild
CMD [ "python", "./your-daemon-or-script.py" ]
```
These images include multiple `ONBUILD` triggers, which should be all you need to bootstrap most applications. The build will `COPY` a `requirements.txt` file, `RUN pip install` on said file, and then copy the current directory into `/usr/src/app`.
You can then build and run the Docker image:
```console
$ docker build -t my-python-app .
$ docker run -it --rm --name my-running-app my-python-app
```
## Run a single Python script
For many simple, single file projects, you may find it inconvenient to write a complete `Dockerfile`. In such cases, you can run a Python script by using the Python Docker image directly:
```console
$ docker run -it --rm --name my-running-script -v "$PWD":/usr/src/myapp -w /usr/src/myapp python:3 python your-daemon-or-script.py
```
or (again, if you need to use Python 2):
```console
$ docker run -it --rm --name my-running-script -v "$PWD":/usr/src/myapp -w /usr/src/myapp python:2 python your-daemon-or-script.py
```
# Image Variants

View File

@ -24,35 +24,47 @@ R is a GNU project. The source code for the R software environment is written pr
Launch R directly for interactive work:
```console
$ docker run -ti --rm r-base
```
## Batch mode
Link the working directory to run R batch commands. We recommend specifying a non-root user when linking a volume to the container to avoid permission changes, as illustrated here:
```console
$ docker run -ti --rm -v "$PWD":/home/docker -w /home/docker -u docker r-base R CMD check .
```
Alternatively, just run a bash session on the container first. This allows a user to run batch commands and also edit and run scripts:
```console
$ docker run -ti --rm r-base /usr/bin/bash
$ vim.tiny myscript.R
```
Write the script in the container, exit `vim`, and run `Rscript`:
```console
$ Rscript myscript.R
```
## Dockerfiles
Use `r-base` as a base for your own Dockerfiles. For instance, something along the lines of the following will compile and run your project:
```dockerfile
FROM r-base:latest
COPY . /usr/local/src/myscripts
WORKDIR /usr/local/src/myscripts
CMD ["Rscript", "myscript.R"]
```
Build your image with the command:
```console
$ docker build -t myscript /path/to/Dockerfile
```
Running this container with no command will execute the script. Alternatively, a user could run this container in interactive or batch mode as described above, instead of linking volumes.
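A minimal sketch of that, assuming the image was built and tagged `myscript` as above:

```console
$ docker run --rm myscript
```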

View File

@ -19,7 +19,9 @@ RabbitMQ is open source message broker software (sometimes called message-orient
One of the important things to note about RabbitMQ is that it stores data based on what it calls the "Node Name", which defaults to the hostname. What this means for usage in Docker is that we should specify `-h`/`--hostname` explicitly for each daemon so that we don't get a random hostname and can keep track of our data:
```console
$ docker run -d --hostname my-rabbit --name some-rabbit rabbitmq:3
```
If you give that a minute, then do `docker logs some-rabbit`, you'll see in the output a block similar to:
@ -40,31 +42,41 @@ See the [RabbitMQ "Clustering Guide"](https://www.rabbitmq.com/clustering.html#e
For setting a consistent cookie (especially useful for clustering but also for remote/cross-container administration via `rabbitmqctl`), use `RABBITMQ_ERLANG_COOKIE`:
```console
$ docker run -d --hostname my-rabbit --name some-rabbit -e RABBITMQ_ERLANG_COOKIE='secret cookie here' rabbitmq:3
```
This can then be used from a separate instance to connect:
```console
$ docker run -it --rm --link some-rabbit:my-rabbit -e RABBITMQ_ERLANG_COOKIE='secret cookie here' rabbitmq:3 bash
root@f2a2d3d27c75:/# rabbitmqctl -n rabbit@my-rabbit list_users
Listing users ...
guest [administrator]
```
Alternatively, one can also use `RABBITMQ_NODENAME` to make repeated `rabbitmqctl` invocations simpler:
```console
$ docker run -it --rm --link some-rabbit:my-rabbit -e RABBITMQ_ERLANG_COOKIE='secret cookie here' -e RABBITMQ_NODENAME=rabbit@my-rabbit rabbitmq:3 bash
root@f2a2d3d27c75:/# rabbitmqctl list_users
Listing users ...
guest [administrator]
```
### Management Plugin
There is a second set of tags provided with the [management plugin](https://www.rabbitmq.com/management.html) installed and enabled by default, which is available on the standard management port of 15672, with the default username and password of `guest` / `guest`:
```console
$ docker run -d --hostname my-rabbit --name some-rabbit rabbitmq:3-management
```
You can access it by visiting `http://container-ip:15672` in a browser or, if you need access outside the host, on port 8080:
```console
$ docker run -d --hostname my-rabbit --name some-rabbit -p 8080:15672 rabbitmq:3-management
```
You can then go to `http://localhost:8080` or `http://host-ip:8080` in a browser.
@ -72,7 +84,9 @@ You can then go to `http://localhost:8080` or `http://host-ip:8080` in a browser
If you wish to change the default username and password of `guest` / `guest`, you can do so with the `RABBITMQ_DEFAULT_USER` and `RABBITMQ_DEFAULT_PASS` environment variables:
```console
$ docker run -d --hostname my-rabbit --name some-rabbit -e RABBITMQ_DEFAULT_USER=user -e RABBITMQ_DEFAULT_PASS=password rabbitmq:3-management
```
You can then go to `http://localhost:8080` or `http://host-ip:8080` in a browser and use `user`/`password` to gain access to the management console
@ -80,11 +94,15 @@ You can then go to `http://localhost:8080` or `http://host-ip:8080` in a browser
If you wish to change the default vhost, you can do so with the `RABBITMQ_DEFAULT_VHOST` environment variable:
```console
$ docker run -d --hostname my-rabbit --name some-rabbit -e RABBITMQ_DEFAULT_VHOST=my_vhost rabbitmq:3-management
```
## Connecting to the daemon
```console
$ docker run --name some-app --link some-rabbit:rabbit -d application-that-uses-rabbitmq
```
# License

View File

@ -17,7 +17,9 @@ Ruby on Rails or, simply, Rails is an open source web application framework whic
## Create a `Dockerfile` in your Rails app project
```dockerfile
FROM rails:onbuild
```
Put this file in the root of your app, next to the `Gemfile`.
@ -25,12 +27,16 @@ This image includes multiple `ONBUILD` triggers which should cover most applicat
You can then build and run the Docker image:
```console
$ docker build -t my-rails-app .
$ docker run --name some-rails-app -d my-rails-app
```
You can test it by visiting `http://container-ip:3000` in a browser or, if you need access outside the host, on port 8080:
```console
$ docker run --name some-rails-app -p 8080:3000 -d my-rails-app
```
You can then go to `http://localhost:8080` or `http://host-ip:8080` in a browser.
@ -39,13 +45,17 @@ You can then go to `http://localhost:8080` or `http://host-ip:8080` in a browser
The `onbuild` tag expects a `Gemfile.lock` in your app directory. This `docker run` will help you generate one. Run it in the root of your app, next to the `Gemfile`:
```console
$ docker run --rm -v "$PWD":/usr/src/app -w /usr/src/app ruby:2.1 bundle install
```
## Bootstrap a new Rails application
If you want to generate the scaffolding for a new Rails project, you can do the following:
```console
$ docker run -it --rm --user "$(id -u):$(id -g)" -v "$PWD":/usr/src/app -w /usr/src/app rails rails new webapp
```
This will create a sub-directory named `webapp` inside your current directory.
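From there, a plausible next step (an assumption, not an official requirement) is to generate the `Gemfile.lock` inside the new app directory using the same helper shown earlier:

```console
$ cd webapp
$ docker run --rm -v "$PWD":/usr/src/app -w /usr/src/app ruby:2.1 bundle install
```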

View File

@ -24,13 +24,17 @@ Perl 6 Language Documentation: [http://doc.perl6.org/](http://doc.perl6.org/)
Simply running a container with the image will launch a Perl 6 REPL:
```console
$ docker run -it rakudo-star
> say 'Hello, Perl!'
Hello, Perl!
```
You can also provide perl6 command line switches to `docker run`:
```console
$ docker run -it rakudo-star -e 'say "Hello!"'
```
# Contributing/Getting Help

View File

@ -21,13 +21,17 @@ Redis is an open-source, networked, in-memory, key-value data store with optiona
## start a redis instance
```console
$ docker run --name some-redis -d redis
```
This image includes `EXPOSE 6379` (the redis port), so standard container linking will make it automatically available to the linked containers (as the following examples illustrate).
## start with persistent storage
```console
$ docker run --name some-redis -d redis redis-server --appendonly yes
```
If persistence is enabled, data is stored in the `VOLUME /data`, which can be used with `--volumes-from some-volume-container` or `-v /docker/host/dir:/data` (see [docs.docker volumes](http://docs.docker.com/userguide/dockervolumes/)).
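For example, combining the persistent-storage invocation above with a host directory bind mount (replace `/docker/host/dir` with a real path on your machine):

```console
$ docker run --name some-redis -v /docker/host/dir:/data -d redis redis-server --appendonly yes
```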
@ -35,23 +39,31 @@ For more about Redis Persistence, see [http://redis.io/topics/persistence](http:
## connect to it from an application
```console
$ docker run --name some-app --link some-redis:redis -d application-that-uses-redis
```
## ... or via `redis-cli`
```console
$ docker run -it --link some-redis:redis --rm redis sh -c 'exec redis-cli -h "$REDIS_PORT_6379_TCP_ADDR" -p "$REDIS_PORT_6379_TCP_PORT"'
```
## Additionally, if you want to use your own redis.conf ...
You can create your own Dockerfile that adds a `redis.conf` from the build context into `/usr/local/etc/redis/`, like so:
```dockerfile
FROM redis
COPY redis.conf /usr/local/etc/redis/redis.conf
CMD [ "redis-server", "/usr/local/etc/redis/redis.conf" ]
```
Alternatively, you can specify something along the same lines with `docker run` options.
```console
$ docker run -v /myredis/conf/redis.conf:/usr/local/etc/redis/redis.conf --name myredis redis redis-server /usr/local/etc/redis/redis.conf
```
Where `/myredis/conf/` is a local directory containing your `redis.conf` file. Using this method means that there is no need for you to have a Dockerfile for your redis container.

View File

@ -21,7 +21,9 @@ Redmine is a free and open source, web-based project management and issue tracki
This is the simplest setup; just run redmine.
```console
$ docker run -d --name some-redmine redmine
```
> not for multi-user production use ([redmine wiki](http://www.redmine.org/projects/redmine/wiki/RedmineInstall#Supported-database-back-ends))
@ -33,15 +35,21 @@ Running Redmine with a database server is the recommended way.
- PostgreSQL
```console
$ docker run -d --name some-postgres -e POSTGRES_PASSWORD=secret -e POSTGRES_USER=redmine postgres
```
- MySQL (replace `--link some-postgres:postgres` with `--link some-mysql:mysql` when running redmine)
```console
$ docker run -d --name some-mysql -e MYSQL_ROOT_PASSWORD=secret -e MYSQL_DATABASE=redmine mysql
```
2. start redmine
```console
$ docker run -d --name some-redmine --link some-postgres:postgres redmine
```
## Alternative Web Server
@ -63,13 +71,17 @@ The Docker documentation is a good starting point for understanding the differen
1. Create a data directory on a suitable volume on your host system, e.g. `/my/own/datadir`.
2. Start your `redmine` container like this:
```console
$ docker run -d --name some-redmine -v /my/own/datadir:/usr/src/redmine/files --link some-postgres:postgres redmine
```
The `-v /my/own/datadir:/usr/src/redmine/files` part of the command mounts the `/my/own/datadir` directory from the underlying host system as `/usr/src/redmine/files` inside the container, where Redmine will store uploaded files.
Note that users on host systems with SELinux enabled may see issues with this. The current workaround is to assign the relevant SELinux policy type to the new data directory so that the container will be allowed to access it:
```console
$ chcon -Rt svirt_sandbox_file_t /my/own/datadir
```
## Port Mapping

View File

@ -23,15 +23,17 @@ Older tags refer to the [deprecated registry](https://github.com/docker/docker-r
### Recommended: run the registry docker container
```console
$ docker run \
    -e SETTINGS_FLAVOR=s3 \
    -e AWS_BUCKET=acme-docker \
    -e STORAGE_PATH=/registry \
    -e AWS_KEY=AKIAHSHB43HS3J92MXZ \
    -e AWS_SECRET=xdDowwlK7TJajV1Y7EoOZrmuPEJlHYcNP2k4j49T \
    -e SEARCH_BACKEND=sqlalchemy \
    -p 5000:5000 \
    registry
```
NOTE: The container will try to allocate port 5000. If the port is already taken, find out which container is already using it by running `docker ps`.
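One quick (unofficial) way to spot the conflicting container is to filter the `docker ps` output for the port:

```console
$ docker ps | grep 5000
```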

View File

@ -23,14 +23,18 @@ The Robot Operating System (ROS) is a set of software libraries and tools that h
## Create a `Dockerfile` in your ROS app project
```dockerfile
FROM ros:indigo
# place here your application's setup specifics
CMD [ "roslaunch", "my-ros-app my-ros-app.launch" ]
```
You can then build and run the Docker image:
```console
$ docker build -t my-ros-app .
$ docker run -it --rm --name my-running-app my-ros-app
```
## Deployment use cases
@ -56,7 +60,9 @@ ROS uses the `~/.ros/` directory for storing logs, and debugging info. If you wi
For example, if one wishes to use their own `.ros` folder that already resides in their local home directory, with a username of `ubuntu`, we can simply launch the container with an additional volume argument:
```console
$ docker run -v "/home/ubuntu/.ros/:/root/.ros/" ros
```
### Devices
@ -75,16 +81,20 @@ If we want all our ROS nodes to easily talk to each other, we can use a virtu
> Build a ROS image that includes ROS tutorials using this `Dockerfile:`
```dockerfile
FROM ros:indigo-ros-base
# install ros tutorials packages
RUN apt-get update && apt-get install -y \
    ros-indigo-ros-tutorials \
    ros-indigo-common-tutorials \
    && rm -rf /var/lib/apt/lists/
```
> Then to build the image from within the same directory:
```console
$ docker build --tag ros:ros-tutorials .
```
#### Create network
@ -98,68 +108,84 @@ If we want all our ROS nodes to easily talk to each other, we can use a virtu
> To create a container for the ROS master and advertise its service:
```console
$ docker run -it --rm \
    --publish-service=master.foo \
    --name master \
    ros:ros-tutorials \
    roscore
```
> Now you can see that master is running and is ready to manage our other ROS nodes. To add our `talker` node, we'll need to point the relevant environment variable to the master service:
```console
$ docker run -it --rm \
    --publish-service=talker.foo \
    --env ROS_HOSTNAME=talker \
    --env ROS_MASTER_URI=http://master:11311 \
    --name talker \
    ros:ros-tutorials \
    rosrun roscpp_tutorials talker
```
> Then in another terminal, run the `listener` node similarly:
```console
$ docker run -it --rm \
    --publish-service=listener.foo \
    --env ROS_HOSTNAME=listener \
    --env ROS_MASTER_URI=http://master:11311 \
    --name listener \
    ros:ros-tutorials \
    rosrun roscpp_tutorials listener
```
> Alright! You should see that `listener` is now echoing each message the `talker` is broadcasting. You can then list the containers and see something like this:
```console
$ docker service ls
SERVICE ID NAME NETWORK CONTAINER
67ce73355e67 listener foo a62019123321
917ee622d295 master foo f6ab9155fdbe
7f5a4748fb8d talker foo e0da2ee7570a
```
> And for the services:
```console
$ docker ps
CONTAINER ID IMAGE COMMAND CREATED STATUS PORTS NAMES
a62019123321 ros:ros-tutorials "/ros_entrypoint.sh About a minute ago Up About a minute 11311/tcp listener
e0da2ee7570a ros:ros-tutorials "/ros_entrypoint.sh About a minute ago Up About a minute 11311/tcp talker
f6ab9155fdbe ros:ros-tutorials "/ros_entrypoint.sh About a minute ago Up About a minute 11311/tcp master
```
#### Introspection
> Ok, now that we see the two nodes are communicating, let's get inside one of the containers and do some introspection on what exactly the topics are:
```console
$ docker exec -it master bash
$ source /ros_entrypoint.sh
```
> If we then use `rostopic` to list published message topics, we should see something like this:
```console
$ rostopic list
/chatter
/rosout
/rosout_agg
```
#### Tear down
> To tear down the structure we've made, we just need to stop the containers and the services. We can stop and remove the containers using `Ctrl^C` where we launched the containers or using the stop command with the names we gave them:
```console
$ docker stop master talker listener
$ docker rm master talker listener
```
# More Resources

View File

@ -24,8 +24,10 @@ Ruby is a dynamic, reflective, object-oriented, general-purpose, open-source pro
## Create a `Dockerfile` in your Ruby app project
```dockerfile
FROM ruby:2.1-onbuild
CMD ["./your-daemon-or-script.rb"]
```
Put this file in the root of your app, next to the `Gemfile`.
@ -34,20 +36,26 @@ bundle install`.
You can then build and run the Ruby image:
```console
$ docker build -t my-ruby-app .
$ docker run -it --name my-running-script my-ruby-app
```
### Generate a `Gemfile.lock`
The `onbuild` tag expects a `Gemfile.lock` in your app directory. This `docker run` will help you generate one. Run it in the root of your app, next to the `Gemfile`:
```console
$ docker run --rm -v "$PWD":/usr/src/app -w /usr/src/app ruby:2.1 bundle install
```
## Run a single Ruby script
For many simple, single file projects, you may find it inconvenient to write a complete `Dockerfile`. In such cases, you can run a Ruby script by using the Ruby Docker image directly:
```console
$ docker run -it --rm --name my-running-script -v "$PWD":/usr/src/myapp -w /usr/src/myapp ruby:2.1 ruby your-daemon-or-script.rb
```
# Image Variants

View File

@ -18,25 +18,35 @@ Sentry is a realtime event logging and aggregation platform. It specializes in m
1. start a redis container
```console
$ docker run -d --name some-redis redis
```
2. start a database container:
- Postgres (recommended by upstream):
```console
$ docker run -d --name some-postgres -e POSTGRES_PASSWORD=secret -e POSTGRES_USER=sentry postgres
```
- MySQL (later steps assume PostgreSQL, replace the `--link some-postgres:postgres` with `--link some-mysql:mysql`):
```console
$ docker run -d --name some-mysql -e MYSQL_ROOT_PASSWORD=secret -e MYSQL_DATABASE=sentry mysql
```
3. now start up sentry server
```console
$ docker run -d --name some-sentry --link some-redis:redis --link some-postgres:postgres sentry
```
4. if this is a new database, you'll need to run `sentry upgrade`
```console
$ docker run -it --rm --link some-postgres:postgres --link some-redis:redis sentry sentry upgrade
```
**Note: the `-it` is important as the initial upgrade will prompt to create an initial user and will fail without it**
@ -44,13 +54,17 @@ Sentry is a realtime event logging and aggregation platform. It specializes in m
- using the celery image:
```console
$ docker run -d --name celery-beat --link some-redis:redis -e CELERY_BROKER_URL=redis://redis celery celery beat
$ docker run -d --name celery-worker1 --link some-redis:redis -e CELERY_BROKER_URL=redis://redis celery
```
- using the celery bundled with sentry
```console
$ docker run -d --name sentry-celery-beat --link some-redis:redis sentry sentry celery beat
$ docker run -d --name sentry-celery1 --link some-redis:redis sentry sentry celery worker
```
### port mapping
@ -60,7 +74,9 @@ If you'd like to be able to access the instance from the host without the contai
If you did not create a superuser during `sentry upgrade`, use the following to create one:
```console
$ docker run -it --rm --link some-postgres:postgres sentry sentry createsuperuser
```
# License

View File

@ -19,15 +19,19 @@ SonarQube is an open source platform for continuous inspection of code quality.
The server is started this way:
```console
$ docker run -d --name sonarqube -p 9000:9000 -p 9092:9092 sonarqube:5.1
```
To analyse a project:
```console
# On Linux:
$ mvn sonar:sonar
# With boot2docker:
$ mvn sonar:sonar -Dsonar.host.url=http://$(boot2docker ip):9000 -Dsonar.jdbc.url="jdbc:h2:tcp://$(boot2docker ip)/sonar"
```
## Database configuration
@ -35,12 +39,14 @@ By default, the image will use an embedded H2 database that is not suited for pr
The production database is configured with these variables: `SONARQUBE_JDBC_USERNAME`, `SONARQUBE_JDBC_PASSWORD` and `SONARQUBE_JDBC_URL`.
```console
$ docker run -d --name sonarqube \
    -p 9000:9000 -p 9092:9092 \
    -e SONARQUBE_JDBC_USERNAME=sonar \
    -e SONARQUBE_JDBC_PASSWORD=sonar \
    -e SONARQUBE_JDBC_URL=jdbc:postgresql://localhost/sonar \
    sonarqube:5.1
```
More recipes can be found [here](https://github.com/SonarSource/docker-sonarqube/blob/master/recipes.md).

View File

@ -14,7 +14,9 @@ Read more about [Thrift](https://thrift.apache.org).
This image is intended to be run as an executable. Files are provided by mounting a directory. Here's an example of compiling `service.thrift` to Ruby, with the output written to the current directory.
```console
$ docker run -v "$PWD:/data" thrift thrift -o /data --gen rb /data/service.thrift
```
Note that you may want to include `-u $(id -u)` to set the UID on generated files. The thrift process runs as root by default, which will generate root-owned files depending on your Docker setup.
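A sketch of the same command with the UID mapping applied:

```console
$ docker run -v "$PWD:/data" -u "$(id -u)" thrift thrift -o /data --gen rb /data/service.thrift
```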

View File

@ -21,11 +21,15 @@ Apache Tomcat (or simply Tomcat) is an open source web server and servlet contai
Run the default Tomcat server (`CMD ["catalina.sh", "run"]`):
```console
$ docker run -it --rm tomcat:8.0
```
You can test it by visiting `http://container-ip:8080` in a browser or, if you need access outside the host, on port 8888:
```console
$ docker run -it --rm -p 8888:8080 tomcat:8.0
```
You can then go to `http://localhost:8888` or `http://host-ip:8888` in a browser.
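To serve your own application rather than the default pages, you can mount a WAR file into Tomcat's `webapps` directory (here `myapp.war` is a hypothetical file name):

```console
$ docker run -it --rm -p 8888:8080 -v "$PWD/myapp.war":/usr/local/tomcat/webapps/myapp.war tomcat:8.0
```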

View File

@ -23,45 +23,49 @@ Development of Ubuntu is led by UK-based Canonical Ltd., a company owned by Sout
### `ubuntu:14.04`
```console
$ docker run ubuntu:14.04 grep -v '^#' /etc/apt/sources.list
deb http://archive.ubuntu.com/ubuntu/ trusty main restricted
deb-src http://archive.ubuntu.com/ubuntu/ trusty main restricted
deb http://archive.ubuntu.com/ubuntu/ trusty-updates main restricted
deb-src http://archive.ubuntu.com/ubuntu/ trusty-updates main restricted
deb http://archive.ubuntu.com/ubuntu/ trusty universe
deb-src http://archive.ubuntu.com/ubuntu/ trusty universe
deb http://archive.ubuntu.com/ubuntu/ trusty-updates universe
deb-src http://archive.ubuntu.com/ubuntu/ trusty-updates universe
deb http://archive.ubuntu.com/ubuntu/ trusty-security main restricted
deb-src http://archive.ubuntu.com/ubuntu/ trusty-security main restricted
deb http://archive.ubuntu.com/ubuntu/ trusty-security universe
deb-src http://archive.ubuntu.com/ubuntu/ trusty-security universe
```
### `ubuntu:12.04`
```console
$ docker run ubuntu:12.04 cat /etc/apt/sources.list
deb http://archive.ubuntu.com/ubuntu/ precise main restricted
deb-src http://archive.ubuntu.com/ubuntu/ precise main restricted
deb http://archive.ubuntu.com/ubuntu/ precise-updates main restricted
deb-src http://archive.ubuntu.com/ubuntu/ precise-updates main restricted
deb http://archive.ubuntu.com/ubuntu/ precise universe
deb-src http://archive.ubuntu.com/ubuntu/ precise universe
deb http://archive.ubuntu.com/ubuntu/ precise-updates universe
deb-src http://archive.ubuntu.com/ubuntu/ precise-updates universe
deb http://archive.ubuntu.com/ubuntu/ precise-security main restricted
deb-src http://archive.ubuntu.com/ubuntu/ precise-security main restricted
deb http://archive.ubuntu.com/ubuntu/ precise-security universe
deb-src http://archive.ubuntu.com/ubuntu/ precise-security universe
```
# Supported Docker versions

View File

@@ -15,13 +15,17 @@ In order to use the image, it is necessary to accept the terms of the WebSphere
The image is designed to support a number of different usage patterns. The following examples are based on the Liberty [application deployment sample](https://developer.ibm.com/wasdev/docs/article_appdeployment/) and assume that [DefaultServletEngine.zip](https://www.ibm.com/developerworks/mydeveloperworks/blogs/wasdev/resource/DefaultServletEngine.zip) has been extracted to `/tmp` and the `server.xml` updated to accept HTTP connections from outside of the container by adding the following element inside the `server` stanza:
<httpEndpoint host="*" httpPort="9080" httpsPort="-1"/>
```xml
<httpEndpoint host="*" httpPort="9080" httpsPort="-1"/>
```
1. The image contains a default server configuration that specifies the `webProfile-6.0` feature and exposes ports 9080 and 9443 for HTTP and HTTPS respectively. A WAR file can therefore be mounted into the `dropins` directory of this server and run. The following example starts a container in the background running a WAR file from the host file system with the HTTP and HTTPS ports mapped to 80 and 443 respectively.
docker run -e LICENSE=accept -d -p 80:9080 -p 443:9443 \
-v /tmp/DefaultServletEngine/dropins/Sample1.war:/opt/ibm/wlp/usr/servers/defaultServer/dropins/Sample1.war \
websphere-liberty
```console
$ docker run -e LICENSE=accept -d -p 80:9080 -p 443:9443 \
-v /tmp/DefaultServletEngine/dropins/Sample1.war:/opt/ibm/wlp/usr/servers/defaultServer/dropins/Sample1.war \
websphere-liberty
```
Once the server has started, you can browse to http://localhost/Sample1/SimpleServlet on the Docker host.
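Assuming `curl` is available on the Docker host and the container from the example above is running, a quick way to confirm the servlet is responding from the command line is:

```console
$ curl http://localhost/Sample1/SimpleServlet
```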
@@ -29,38 +33,49 @@ The image is designed to support a number of different usage patterns. The follo
2. For greater flexibility over configuration, it is possible to mount an entire server configuration directory from the host and then specify the server name as a parameter to the run command. Note that this particular example server configuration only provides HTTP access.
docker run -e LICENSE=accept -d -p 80:9080 \
-v /tmp/DefaultServletEngine:/opt/ibm/wlp/usr/servers/DefaultServletEngine \
websphere-liberty /opt/ibm/wlp/bin/server run DefaultServletEngine
```console
$ docker run -e LICENSE=accept -d -p 80:9080 \
-v /tmp/DefaultServletEngine:/opt/ibm/wlp/usr/servers/DefaultServletEngine \
websphere-liberty /opt/ibm/wlp/bin/server run DefaultServletEngine
```
3. It is also possible to build an application layer on top of this image using either the default server configuration or a new server configuration and, optionally, accept the license as part of that build. Here we have copied the `Sample1.war` from `/tmp/DefaultServletEngine/dropins` to the same directory as the following Dockerfile.
FROM websphere-liberty
ADD Sample1.war /opt/ibm/wlp/usr/servers/defaultServer/dropins/
ENV LICENSE accept
```dockerfile
FROM websphere-liberty
ADD Sample1.war /opt/ibm/wlp/usr/servers/defaultServer/dropins/
ENV LICENSE accept
```
This can then be built and run as follows:
docker build -t app .
docker run -d -p 80:9080 -p 443:9443 app
```console
$ docker build -t app .
$ docker run -d -p 80:9080 -p 443:9443 app
```
4. Lastly, it is possible to mount a data volume container containing the application and the server configuration onto the image. This has the benefit that it has no dependency on files from the host but still allows the application container to be easily re-mounted on a newer version of the application server image. The example assumes that you have copied the `/tmp/DefaultServletEngine` directory into the same directory as the Dockerfile.
Build and run the data volume container:
FROM websphere-liberty
ADD DefaultServletEngine /opt/ibm/wlp/usr/servers/DefaultServletEngine
docker build -t app-image .
docker run -d -v /opt/ibm/wlp/usr/servers/DefaultServletEngine \
--name app app-image true
```dockerfile
FROM websphere-liberty
ADD DefaultServletEngine /opt/ibm/wlp/usr/servers/DefaultServletEngine
```
```console
$ docker build -t app-image .
$ docker run -d -v /opt/ibm/wlp/usr/servers/DefaultServletEngine \
--name app app-image true
```
Run the WebSphere Liberty image with the volumes from the data volume container mounted:
docker run -e LICENSE=accept -d -p 80:9080 \
--volumes-from app websphere-liberty \
/opt/ibm/wlp/bin/server run DefaultServletEngine
```console
$ docker run -e LICENSE=accept -d -p 80:9080 \
--volumes-from app websphere-liberty \
/opt/ibm/wlp/bin/server run DefaultServletEngine
```
# License

View File

@@ -15,7 +15,9 @@ WordPress is a free and open source blogging tool and a content management syste
# How to use this image
docker run --name some-wordpress --link some-mysql:mysql -d wordpress
```console
$ docker run --name some-wordpress --link some-mysql:mysql -d wordpress
```
The following environment variables are also honored for configuring your WordPress instance:
@@ -29,30 +31,36 @@ If the `WORDPRESS_DB_NAME` specified does not already exist on the given MySQL s
If you'd like to be able to access the instance from the host without the container's IP, standard port mappings can be used:
docker run --name some-wordpress --link some-mysql:mysql -p 8080:80 -d wordpress
```console
$ docker run --name some-wordpress --link some-mysql:mysql -p 8080:80 -d wordpress
```
Then, access it via `http://localhost:8080` or `http://host-ip:8080` in a browser.
If you'd like to use an external database instead of a linked `mysql` container, specify the hostname and port with `WORDPRESS_DB_HOST` along with the password in `WORDPRESS_DB_PASSWORD` and the username in `WORDPRESS_DB_USER` (if it is something other than `root`):
docker run --name some-wordpress -e WORDPRESS_DB_HOST=10.1.2.3:3306 \
-e WORDPRESS_DB_USER=... -e WORDPRESS_DB_PASSWORD=... -d wordpress
```console
$ docker run --name some-wordpress -e WORDPRESS_DB_HOST=10.1.2.3:3306 \
-e WORDPRESS_DB_USER=... -e WORDPRESS_DB_PASSWORD=... -d wordpress
```
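A `host:port` value like the one passed to `WORDPRESS_DB_HOST` above can be split with plain POSIX parameter expansion; this is just an illustrative sketch of the format (the image's own entrypoint handles the parsing internally):

```shell
# Split a WORDPRESS_DB_HOST-style "host:port" value (illustrative only).
db_host="10.1.2.3:3306"
host="${db_host%:*}"   # drop the shortest ':...' suffix -> 10.1.2.3
port="${db_host##*:}"  # drop everything up to the last ':' -> 3306
echo "$host $port"
```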
## ... via [`docker-compose`](https://github.com/docker/compose)
Example `docker-compose.yml` for `wordpress`:
wordpress:
image: wordpress
links:
- db:mysql
ports:
- 8080:80
db:
image: mariadb
environment:
MYSQL_ROOT_PASSWORD: example
```yaml
wordpress:
image: wordpress
links:
- db:mysql
ports:
- 8080:80
db:
image: mariadb
environment:
MYSQL_ROOT_PASSWORD: example
```
Run `docker-compose up`, wait for it to initialize completely, and visit `http://localhost:8080` or `http://host-ip:8080`.
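As a sketch (assuming the `docker-compose.yml` above is saved in the current directory), bringing the pair up in the background and checking the front end might look like:

```console
$ docker-compose up -d
$ curl -I http://localhost:8080
```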