FORMAT ALL THE THINGS

Commit a71fa247d9 (parent 34b38dc22d) by Tianon Gravi, 2015-02-12 11:49:46 -07:00
108 changed files with 911 additions and 1881 deletions


%%TAGS%%
For more information about this image and its history, please see the [relevant manifest file (`library/%%REPO%%`)](https://github.com/docker-library/official-images/blob/master/library/%%REPO%%) in the [`docker-library/official-images` GitHub repo](https://github.com/docker-library/official-images).
%%CONTENT%%%%LICENSE%%

README.md

# What is this?
This repository contains the docs for each of the Docker official images. See [docker-library/official-images](https://github.com/docker-library/official-images) for the configuration of how the images are built. To see all of the official images, go to the [hub](https://registry.hub.docker.com/repos/stackbrew/?&s=alphabetical).
# How do I add a new image's docs?
- create a folder for my image: `mkdir myimage`
- create a `README-short.txt` (required, 100 char max)
- create a `content.md` (required, 80 col wrap)
- create a `license.md` (required, 80 col wrap)
- add a `logo.png` (recommended)
- edit `update.sh` as needed (see below)
- run `./update.sh myimage` to generate `myimage/README.md`
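The steps above can be sketched as a shell session; `myimage` and the file contents are placeholders, and the final `./update.sh` call is left commented since it needs this repo's tooling:

```shell
# Work in a scratch directory so nothing in the real repo is touched.
cd "$(mktemp -d)"

# 1. Create a folder for the image.
mkdir myimage

# 2. Short description for the Docker Hub (100 characters max, one line).
printf '%s' 'A tiny placeholder image used to demonstrate the docs layout.' > myimage/README-short.txt

# 3-4. Main content and license stub (wrap real prose at 80 columns).
cat > myimage/content.md <<'EOF'
# What is myimage?
// about what the contained software is
EOF
cat > myimage/license.md <<'EOF'
View [license information](http://example.com/LICENSE)
for the software contained in this image.
EOF

# 5-7. Add a logo.png, tweak update.sh if needed, then regenerate:
# ./update.sh myimage
```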
# What are all these files?
## `update.sh`
This is the main script used to generate the `README.md` files for each image. The generated file is committed along with the files used to generate it (see below on what customizations are available). When a new image is added that is not under the `docker-library` namespace on GitHub, a new entry must be added to the `otherRepos` array in this script. It accepts as arguments the image(s) you want to update; with no arguments, it updates all of them.
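The argument handling described above can be pictured with a minimal sketch; `update_image` stands in for the real generation logic, and the fallback image list is made up:

```shell
# Sketch of update.sh's argument handling: regenerate the named images,
# or fall back to every image when no arguments are given.
update_image() {
    echo "would regenerate $1/README.md"
}

main() {
    if [ "$#" -eq 0 ]; then
        # No arguments: in the real script this comes from listing the repo.
        set -- golang redis
    fi
    for image in "$@"; do
        update_image "$image"
    done
}

main mysql
main
```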
## `push.pl`
This is used by us to push the actual content of the READMEs to the Docker Hub as special access is required to modify the Hub description contents.
## `generate-dockerfile-links-partial.sh`
This script is used by `update.sh` to create the "Supported tags and respective `Dockerfile` links" section of each generated `README.md` from the information in the [official-images `library/` manifests](https://github.com/docker-library/official-images/tree/master/library).
## `generate-repo-stub-readme.sh`
This is used to generate a simple `README.md` to put in the image's repo. The argument is the name of the image, like `golang`; it then outputs the readme to standard out.
## `README-template.md` and `user-feedback.md`
These files are the templates used in building the `<image name>/README.md` file, in combination with the individual image's files.
## folder `<image name>`
This is where all the partial and generated files for a given image reside (e.g. `golang/`).
## `<image name>/README.md`
This file is generated using `update.sh`.
## `<image name>/content.md`
This file contains the main content of your readme. The basic parts you should have are a "What Is" section and a "How To" section. See the doc on [Official Repos](https://docs.docker.com/docker-hub/official_repos/#a-long-description) for more information on the long description. The issues and contribution section is generated by the script but can be overridden. The following is a basic layout:
    # What is XYZ?
    // about what the contained software is
    %%LOGO%%
    # How to use this image
    // descriptions and examples of common use cases for the image
    // make use of subsections as necessary
## `<image name>/README-short.txt`
This is the short description for the Docker Hub, limited to 100 characters on a single line.
> Go (golang) is a general purpose, higher-level, imperative programming language.
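As a quick sanity check on that limit, the golang example above can be measured with `wc` (this one-liner is an illustration, not part of the repo's tooling):

```shell
# Count the bytes in a candidate short description; it must fit in 100.
desc='Go (golang) is a general purpose, higher-level, imperative programming language.'
len=$(printf '%s' "$desc" | wc -c)
echo "$len characters"
```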
## `<image name>/logo.png`
Logo for the contained software. Specifications can be found in the docs on [Official Repos](https://docs.docker.com/docker-hub/official_repos/#a-logo).
## `<image name>/license.md`
This file should contain a link to the license for the main software in the image, wrapped to 80 columns. Here is an example for `golang`:
    View [license information](http://golang.org/LICENSE)
    for the software contained in this image.
## `<image name>/user-feedback.md`
This file is an optional override of the default `user-feedback.md` for those repositories with different issue and contributing policies.
## `<image name>/mailing-list.md`
This file is a snippet that gets inserted into the user feedback section to provide an extra way to get help, like a mailing list. Here is an example from the Postgres image:
    on the [mailing list](http://www.postgresql.org/community/lists/subscribe/) or
# Issues and Contributing
If you would like to make a new Official Image, be sure to follow the [guidelines](https://docs.docker.com/docker-hub/official_repos/) and talk to partners@docker.com.
Feel free to make a pull request for fixes and improvements to the current documentation. For questions or problems with this repo, come talk to us via the `#docker-library` IRC channel on [Freenode](https://freenode.net) or open up an issue.


# What is `buildpack-deps`?
In spirit, `buildpack-deps` is similar to [Heroku's stack images](https://github.com/heroku/stack-images/blob/master/bin/cedar.sh). It includes a large number of "development header" packages needed by various things like Ruby Gems, PyPI modules, etc. For example, `buildpack-deps` would let you do a `bundle install` in an arbitrary application directory without knowing beforehand that `ssl.h` is required to build a dependent module.
%%LOGO%%
This stack is designed to be the foundation of a language-stack image.
## What's included?
The main tags of this image are the full batteries-included approach. With them, a majority of arbitrary `gem install` / `npm install` / `pip install` commands should be successful without additional header/development packages.
For some language stacks, that doesn't make sense, particularly if linking to arbitrary external C libraries is much less common (as in Go and Java, for example), which is where these other smaller variants can come in handy.
### `curl`
This variant includes just the `curl`, `wget`, and `ca-certificates` packages. This is perfect for cases like the Java JRE, where downloading JARs is very common and necessary, but checking out code isn't.
### `scm`
This variant is based on `curl`, but also adds various source control management tools. As of this writing, the current list of included tools is `bzr`, `git`, `hg`, and `svn`. Intentionally missing is `cvs`, due to its dwindling relevance (sorry, CVS). This image is perfect for cases like the Java JDK, where downloading JARs is very common (hence the `curl` base), but checking out code is also more common (compared to the JRE).
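For illustration, a hypothetical JDK-style image could start from the `scm` variant rather than the full image (this sketch is an assumption, not taken from a real official image):

```dockerfile
# Hypothetical Dockerfile sketch: curl/wget plus SCM tools are enough
# for fetching JARs and checking out code; no C build headers needed.
FROM buildpack-deps:scm

# Install a JDK on top (the package name is distribution-dependent).
RUN apt-get update \
	&& apt-get install -y --no-install-recommends default-jdk \
	&& rm -rf /var/lib/apt/lists/*
```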


View [license information](https://www.debian.org/social_contract#guidelines) for the software contained in this image.


# What is BusyBox? The Swiss Army Knife of Embedded Linux
At about 2.5 MB in size, [BusyBox](http://www.busybox.net/) is a very good ingredient to craft space-efficient distributions.
BusyBox combines tiny versions of many common UNIX utilities into a single small executable. It provides replacements for most of the utilities you usually find in GNU fileutils, shellutils, etc. The utilities in BusyBox generally have fewer options than their full-featured GNU cousins; however, the options that are included provide the expected functionality and behave very much like their GNU counterparts. BusyBox provides a fairly complete environment for any small or embedded system.
> [wikipedia.org/wiki/BusyBox](https://en.wikipedia.org/wiki/BusyBox)
## Run BusyBox shell
    docker run -it --rm busybox
This will drop you into an `sh` shell to allow you to do what you want inside a BusyBox system.
## Create a `Dockerfile` for a binary
    FROM busybox
    COPY ./my-static-binary /my-static-binary
    CMD ["/my-static-binary"]
This `Dockerfile` will allow you to create a minimal image for your statically compiled binary. You will have to compile the binary in some other place like another container.
## More about this image
The tags of this image are built using two different methods. The `ubuntu` tags use the `busybox-static` package from Ubuntu, adding a few support files so that it works in Docker. It's super fast to build (a minute or even less). The `buildroot` tags go the long way: they use buildroot to craft a whole filesystem, with busybox but also all required libraries and other support files. It has a stronger guarantee of "this will work". It is also smaller because it uses uClibc; however, it takes hours to build.
Having two totally different builders means that if one of them goes belly up, we can always fall back on the other, since this image is used in much of the build testing of `docker` itself.


View [license information](http://www.busybox.net/license.html) for the software contained in this image.


# CentOS
CentOS Linux is a community-supported distribution derived from sources freely provided to the public by [Red Hat](ftp://ftp.redhat.com/pub/redhat/linux/enterprise/) for Red Hat Enterprise Linux (RHEL). As such, CentOS Linux aims to be functionally compatible with RHEL. The CentOS Project mainly changes packages to remove upstream vendor branding and artwork. CentOS Linux is no-cost and free to redistribute. Each CentOS Linux version is maintained for up to 10 years (by means of security updates -- the duration of the support interval by Red Hat has varied over time with respect to Sources released). A new CentOS Linux version is released approximately every 2 years and each CentOS Linux version is periodically updated (roughly every 6 months) to support newer hardware. This results in a secure, low-maintenance, reliable, predictable, and reproducible Linux environment.
> [wiki.centos.org](https://wiki.centos.org/FrontPage)
# CentOS image documentation
The `centos:latest` tag is always the most recent version currently available.
## Rolling builds
The CentOS Project offers regularly updated images for all active releases. These images will be updated monthly or as needed for emergency fixes. These rolling updates are tagged with the major version number only. For example: `docker pull centos:6` or `docker pull centos:7`
## Minor tags
Additionally, images with minor version tags that correspond to install media are also offered. **These images DO NOT receive updates** as they are intended to match the installation ISO contents. If you choose to use these images, it is highly recommended that you include `RUN yum -y update && yum clean all` in your Dockerfile, or otherwise address any potential security concerns. To use these images, please specify the minor version tag:
For example: `docker pull centos:5.11` or `docker pull centos:6.6`
# Package documentation
By default, the CentOS containers are built using yum's `nodocs` option, which helps reduce the size of the image. If you install a package and discover that files are missing, please comment out the line `tsflags=nodocs` in `/etc/yum.conf` and reinstall your package.
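That edit can be scripted; here is a minimal sketch of the `sed` step against a fabricated `yum.conf` (the real file lives at `/etc/yum.conf` inside the container):

```shell
# Work on a throwaway copy rather than a real /etc/yum.conf.
cd "$(mktemp -d)"
cat > yum.conf <<'EOF'
[main]
gpgcheck=1
tsflags=nodocs
EOF

# Comment out the nodocs flag so reinstalled packages keep their docs.
sed -i 's/^tsflags=nodocs/#tsflags=nodocs/' yum.conf
grep '#tsflags=nodocs' yum.conf
```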
# Systemd integration
Currently, systemd in CentOS 7 has been removed and replaced with a `fakesystemd` package for dependency resolution. This is due to systemd requiring the `CAP_SYS_ADMIN` capability, as well as being able to read the host's cgroups. If you wish to replace the fakesystemd package and use systemd normally, please follow the steps below.
## Dockerfile for systemd base image
    FROM centos:7
    MAINTAINER "you" <your@email.here>
    ENV container docker
    RUN yum -y swap -- remove fakesystemd -- install systemd systemd-libs
    RUN yum -y update; yum clean all; \
        (cd /lib/systemd/system/sysinit.target.wants/; for i in *; do [ $i == systemd-tmpfiles-setup.service ] || rm -f $i; done); \
        rm -f /lib/systemd/system/multi-user.target.wants/*;\
        rm -f /etc/systemd/system/*.wants/*;\
        rm -f /lib/systemd/system/local-fs.target.wants/*; \
        rm -f /lib/systemd/system/sockets.target.wants/*udev*; \
        rm -f /lib/systemd/system/sockets.target.wants/*initctl*; \
        rm -f /lib/systemd/system/basic.target.wants/*;\
        rm -f /lib/systemd/system/anaconda.target.wants/*;
    VOLUME [ "/sys/fs/cgroup" ]
    CMD ["/usr/sbin/init"]
This Dockerfile swaps out fakesystemd for the real package, but deletes a number of unit files which might cause issues. From here, you are ready to build your base image.
    docker build --rm -t local/c7-systemd .
## Example systemd enabled app container
In order to use the systemd enabled base container created above, you will need to create your `Dockerfile` similar to the one below.
    FROM local/c7-systemd
    RUN yum -y install httpd; yum clean all; systemctl enable httpd.service
    EXPOSE 80
    CMD ["/usr/sbin/init"]
Build this image:
    docker build --rm -t local/c7-systemd-httpd .
## Running a systemd enabled app container
In order to run a container with systemd, you will need to use the `--privileged` option mentioned earlier, as well as mounting the cgroups volumes from the host. Below is an example command that will run the systemd enabled httpd container created earlier.
    docker run --privileged -ti -v /sys/fs/cgroup:/sys/fs/cgroup:ro -p 80:80 local/c7-systemd-httpd
This container is running with systemd in a limited context, but it must always be run as a privileged container with the cgroups filesystem mounted.


# What is Clojure?
Clojure is a dialect of the Lisp programming language. It is a general-purpose programming language with an emphasis on functional programming. It runs on the Java Virtual Machine, Common Language Runtime, and JavaScript engines. Like other Lisps, Clojure treats code as data and has a macro system.
> [wikipedia.org/wiki/Clojure](http://en.wikipedia.org/wiki/Clojure)
## Start a Lein/Clojure instance in your app
Since the most common way to use Clojure is in conjunction with [Leiningen (`lein`)](http://leiningen.org/), this image assumes that's how you'll be working. The most straightforward way to use this image is to add a `Dockerfile` to an existing Leiningen/Clojure project:
    FROM clojure
    COPY . /usr/src/app
    WORKDIR /usr/src/app
    CMD ["lein", "run"]
Then, run these commands to build and run the image:
    docker build -t my-clojure-app .
    docker run -it --rm --name my-running-app my-clojure-app
While the above is the most straightforward example of a `Dockerfile`, it does have some drawbacks. The `lein run` command will download your dependencies, compile the project, and then run it. That's a lot of work, all of which you may not want done every time you run the image. To get around this, you can download the dependencies and compile the project ahead of time. This will significantly reduce startup time when you run your image.
    FROM clojure
    RUN mkdir -p /usr/src/app
    WORKDIR /usr/src/app
    COPY project.clj /usr/src/app/
    RUN lein deps
    COPY . /usr/src/app
    RUN mv "$(lein uberjar | sed -n 's/^Created \(.*standalone\.jar\)/\1/p')" app-standalone.jar
    CMD ["java", "-jar", "app-standalone.jar"]
Writing the `Dockerfile` this way will download the dependencies (and cache them, so they are only re-downloaded when the dependencies change) and then compile them into a standalone jar ahead of time rather than each time the image is run.
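The `RUN mv "$(lein uberjar | sed …)"` step works because `lein uberjar` prints a `Created <path>` line for each jar it builds, and the `sed` expression keeps only the standalone one. A standalone sketch with faked `lein` output:

```shell
# Fake the two "Created ..." lines lein uberjar would print, then pick
# out the standalone jar path with the Dockerfile's sed expression.
fake_lein_output='Created /usr/src/app/target/app-0.1.0.jar
Created /usr/src/app/target/app-0.1.0-standalone.jar'

jar=$(printf '%s\n' "$fake_lein_output" | sed -n 's/^Created \(.*standalone\.jar\)/\1/p')
echo "$jar"
```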
You can then build and run the image as above.
## Compile your Lein/Clojure project into a jar from within the container
If you have an existing Lein/Clojure project, it's fairly straightforward to compile your project into a jar from a container:
    docker run -it --rm -v "$PWD":/usr/src/app -w /usr/src/app clojure lein uberjar
This will build your project into a jar file located in your project's `target/uberjar` directory.


View [license information](http://clojure.org/license) for the software contained in this image.


# What is Crate?
Crate is an Elastic SQL Data Store. Distributed by design, Crate makes centralized database servers obsolete. It is a realtime, non-blocking SQL engine with full-blown search: highly available and massively scalable, yet simple to use.
[Crate](https://crate.io/)
## How to use this image
    docker run -d -p 4200:4200 -p 4300:4300 crate:latest
### Attach persistent data directory
    docker run -d -p 4200:4200 -p 4300:4300 -v <data-dir>:/data crate
### Use custom Crate configuration
    docker run -d -p 4200:4200 -p 4300:4300 crate -Des.config=/path/to/crate.yml
Any configuration settings may be specified upon startup using the `-D` option prefix. For example, configuring the cluster name by using system properties will work this way:
    docker run -d -p 4200:4200 -p 4300:4300 crate crate -Des.cluster.name=cluster
For further configuration options please refer to the [Configuration](https://crate.io/docs/stable/configuration.html) section of the online documentation.
### Environment
To set environment variables for Crate Data you need to use the `--env` option when starting the docker image.
For example, setting the heap size:
    docker run -d -p 4200:4200 -p 4300:4300 --env CRATE_HEAP_SIZE=32g crate
## Multicast
Crate uses multicast for node discovery by default. However, Docker only supports multicast between containers on the same host. This means that nodes started on the same host will discover each other automatically, but nodes started on different hosts need unicast enabled.
You can enable unicast in your custom `crate.yml`. See also: [Crate Multi Node Setup](https://crate.io/docs/en/latest/best_practice/multi_node_setup.html).
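As a rough sketch, a `crate.yml` fragment for unicast discovery could look like the following. Note this is illustrative only: the hostnames are placeholders, and the key names assume the `-Des.` startup options used below map directly to `crate.yml` settings — check the linked configuration documentation for your Crate version.

	# assumed mapping of the -Des.* flags to crate.yml keys
	multicast.enabled: false
	discovery.zen.ping.unicast.hosts:
	  - crate1.example.com:4300
	  - crate2.example.com:4300

You would then mount this file into the container and point Crate at it via `-Des.config=/path/to/crate.yml`.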
Due to its architecture, Crate publishes the host it runs on for discovery within the cluster. Since the address of the host inside the docker container differs from the actual host the docker image is running on, you need to tell Crate to publish the address of the docker host for discovery.
	docker run -d -p 4200:4200 -p 4300:4300 crate crate -Des.network.publish_host=host1.example.com
If you change the transport port from the default `4300` to something else, you also need to pass the publish port to Crate.
	docker run -d -p 4200:4200 -p 4321:4300 crate crate -Des.transport.publish_port=4321
### Example Usage in a Multinode Setup
	HOSTS='crate1.example.com:4300,crate2.example.com:4300,crate3.example.com:4300'
	HOST=crate1.example.com
	docker run -d \
		-p 4200:4200 \
		-p 4300:4300 \
		--name node1 \
		--volume /mnt/data:/data \
		--env CRATE_HEAP_SIZE=8g \
		crate:latest \
		crate -Des.cluster.name=cratecluster \
			-Des.node.name=crate1 \
			-Des.transport.publish_port=4300 \
			-Des.network.publish_host=$HOST \
			-Des.multicast.enabled=false \
			-Des.discovery.zen.ping.unicast.hosts=$HOSTS \
			-Des.discovery.zen.minimum_master_nodes=2

View [license information](https://github.com/crate/crate/blob/master/LICENSE.txt) for the software contained in this image.

## Issues
If you have any problems with, or questions about this image, please contact us through a [GitHub issue](https://github.com/crate/docker-crate/issues).
If you have any questions or suggestions, we would be very happy to help. Feel free to swing by our IRC channel `#crate` on [Freenode](http://freenode.net).
For further information and official contact please visit [https://crate.io](https://crate.io).
## Contributing
You are very welcome to contribute features or fixes! Before we can accept any pull requests to Crate Data we need you to agree to our [CLA](https://crate.io/community/contribute/). For further information please refer to [CONTRIBUTING.rst](https://github.com/crate/crate/blob/master/CONTRIBUTING.rst).

# What is CRUX?
CRUX is a lightweight Linux distribution for the x86-64 architecture targeted at experienced Linux users. The primary focus of this distribution is "keep it simple", which is reflected in a simple tar.gz-based package system, BSD-style initscripts, and a relatively small collection of trimmed packages. The secondary focus is utilization of new Linux features and recent tools and libraries. CRUX also has a ports system which makes it easy to install and upgrade applications.
# Why use CRUX?
There are many Linux distributions out there these days, so what makes CRUX any better than the others? The choice of distribution is a matter of taste, really. Here are a few hints about the tastes and goals of the people behind CRUX. CRUX is made with simplicity in mind from beginning to end.
Making it easy to create new and update old packages is essential; updating a package in CRUX is often just a matter of typing `pkgmk -d -u`. The usage of ports helps keep your packages up-to-date; not the latest bleeding-edge-alpha version, but the latest stable version. Other features include creating packages optimized for your processor, e.g. by compiling with `-march=x86-64`, and avoiding cluttering the filesystem with files you'll never use, e.g. `/usr/doc/*`, etc. If you need more information about a specific program, other than information found in the man-page, Google usually knows all about it.
Finally, it strives to use new features as they become available, as long as they are consistent with the rest of the goals. In short, CRUX might suit you very well if you are:
- A somewhat experienced Linux user who wants a clean and solid Linux distribution as the foundation of your installation.
- A person who prefers editing configuration files with an editor to using a GUI.
- Someone who does not hesitate to download and compile programs from the source.

# What is Debian?
Debian is an operating system which is composed primarily of free and open-source software, most of which is under the GNU General Public License, and developed by a group of individuals known as the Debian project. Debian is one of the most popular Linux distributions for personal computers and network servers, and has been used as a base for several other Linux distributions.
> [wikipedia.org/wiki/Debian](https://en.wikipedia.org/wiki/Debian)
# About this image
The `debian:latest` tag will always point to the latest stable release (which is, at the time of this writing, `debian:wheezy`). Stable releases are also tagged with their version (ie, `debian:wheezy` is currently also the same as `debian:7.4`).
The rolling tags (`debian:stable`, `debian:testing`, etc) use the rolling suite names in their `/etc/apt/sources.list` file (ie, `deb http://http.debian.net/debian testing main`).
## sources.list
The mirror of choice for these images is [http.debian.net](http://http.debian.net) so that it's as close to optimal for everyone as possible, regardless of location.
	$ docker run debian:wheezy cat /etc/apt/sources.list
	deb http://http.debian.net/debian wheezy main
	deb http://http.debian.net/debian wheezy-updates main
	deb http://security.debian.org/ wheezy/updates main

# What is Django?
Django is a free and open source web application framework, written in Python, which follows the model-view-controller architectural pattern. Django's primary goal is to ease the creation of complex, database-driven websites with an emphasis on reusability and "pluggability" of components.
> [wikipedia.org/wiki/Django_(web_framework)](https://en.wikipedia.org/wiki/Django_%28web_framework%29)
## Create a `Dockerfile` in your Django app project
	FROM django:onbuild
Put this file in the root of your app, next to the `requirements.txt`.
This image includes multiple `ONBUILD` triggers which should cover most applications. The build will `COPY . /usr/src/app`, `RUN pip install`, `EXPOSE 8000`, and set the default command to `python manage.py runserver`.
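Conceptually, the triggers described above behave roughly as if the base image contained a Dockerfile like this (an illustrative sketch, not the image's literal contents; the `WORKDIR` line is an assumption so the relative paths resolve):

	# sketch of what the onbuild variant adds to a downstream build
	ONBUILD COPY . /usr/src/app
	ONBUILD WORKDIR /usr/src/app
	ONBUILD RUN pip install -r requirements.txt
	EXPOSE 8000
	CMD ["python", "manage.py", "runserver", "0.0.0.0:8000"]
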
You can then build and run the Docker image:
	docker build -t my-django-app .
	docker run --name some-django-app -d my-django-app
You can test it by visiting `http://container-ip:8000` in a browser or, if you need access outside the host, on `http://localhost:8000` with the following command:
	docker run --name some-django-app -p 8000:8000 -d my-django-app
## Without a `Dockerfile`
Of course, if you don't want to take advantage of magical and convenient `ONBUILD` triggers, you can always just use `docker run` directly to avoid having to add a `Dockerfile` to your project.
	docker run --name some-django-app -v "$PWD":/usr/src/app -w /usr/src/app -p 8000:8000 -d django bash -c "pip install -r requirements.txt && python manage.py runserver 0.0.0.0:8000"
## Bootstrap a new Django Application
If you want to generate the scaffolding for a new Django project, you can do the following:
	docker run -it --rm --user "$(id -u):$(id -g)" -v "$PWD":/usr/src/app -w /usr/src/app django django-admin.py startproject mysite
This will create a sub-directory named `mysite` inside your current directory.

View [license information](https://github.com/django/django/blob/master/LICENSE) for the software contained in this image.

# What is Docker?
Docker is an open-source project that automates the deployment of applications inside software containers, by providing an additional layer of abstraction and automation of operating-system-level virtualization on Linux. Docker uses resource isolation features of the Linux kernel such as cgroups and kernel namespaces to allow independent "containers" to run within a single Linux instance, avoiding the overhead of starting virtual machines.
> [wikipedia.org/wiki/Docker_(software)](https://en.wikipedia.org/wiki/Docker_%28software%29)
%%LOGO%%
# About this image
This image contains the building and testing environment of the Docker project itself, from which the official releases are made.

# What is Elasticsearch?
Elasticsearch is a search server based on Lucene. It provides a distributed, multitenant-capable full-text search engine with a RESTful web interface and schema-free JSON documents.
Elasticsearch is a registered trademark of Elasticsearch BV.
You can run the default `elasticsearch` command simply:
	docker run -d elasticsearch
You can also pass in additional flags to `elasticsearch`:
	docker run -d elasticsearch elasticsearch -Des.node.name="TestNode"
This image comes with a default set of configuration files for `elasticsearch`, but if you want to provide your own set of configuration files, you can do so via a volume mounted at `/usr/share/elasticsearch/config`:
	docker run -d -v "$PWD/config":/usr/share/elasticsearch/config elasticsearch
This image is configured with a volume at `/usr/share/elasticsearch/data` to hold the persisted index data. Use that path if you would like to keep the data in a mounted volume:
	docker run -d -v "$PWD/esdata":/usr/share/elasticsearch/data elasticsearch

# Fedora
This image serves as the `official Fedora image` for `Fedora 21` and as a semi-official image for Fedora 20 (heisenbug) and rawhide.
%%LOGO%%
The `fedora:latest` tag will always point to the latest stable release, currently [Fedora 21](https://getfedora.org/). `fedora:latest` is now the same as `fedora:21`.
Fedora rawhide is available via `fedora:rawhide` and Fedora 20 via `fedora:20` and `fedora:heisenbug`.
The metalink `http://mirrors.fedoraproject.org` is used to automatically select a mirror site (both for building the image as well as for the yum repos in the container image).
	$ docker run fedora cat /etc/yum.repos.d/fedora.repo | grep metalink
	metalink=https://mirrors.fedoraproject.org/metalink?repo=fedora-$releasever&arch=$basearch
	metalink=https://mirrors.fedoraproject.org/metalink?repo=fedora-debug-$releasever&arch=$basearch
	metalink=https://mirrors.fedoraproject.org/metalink?repo=fedora-source-$releasever&arch=$basearch

# What is GCC?
The GNU Compiler Collection (GCC) is a compiler system produced by the GNU Project that supports various programming languages. GCC is a key component of the GNU toolchain. The Free Software Foundation (FSF) distributes GCC under the GNU General Public License (GNU GPL). GCC has played an important role in the growth of free software, as both a tool and an example.
> [wikipedia.org/wiki/GNU_Compiler_Collection](https://en.wikipedia.org/wiki/GNU_Compiler_Collection)
## Start a GCC instance running your app
The most straightforward way to use this image is to use a gcc container as both the build and runtime environment. In your `Dockerfile`, writing something along the lines of the following will compile and run your project:
	FROM gcc:4.9
	COPY . /usr/src/myapp
	WORKDIR /usr/src/myapp
	RUN gcc -o myapp main.c
	CMD ["./myapp"]
Then, build and run the Docker image:
	docker build -t my-gcc-app .
	docker run -it --rm --name my-running-app my-gcc-app
## Compile your app inside the Docker container
There may be occasions where it is not appropriate to run your app inside a container. To compile, but not run your app inside the Docker instance, you can write something like:
	docker run --rm -v "$PWD":/usr/src/myapp -w /usr/src/myapp gcc:4.9 gcc -o myapp myapp.c
This will add your current directory, as a volume, to the container, set the working directory to the volume, and run the command `gcc -o myapp myapp.c`. This tells gcc to compile the code in `myapp.c` and output the executable to `myapp`. Alternatively, if you have a `Makefile`, you can instead run the `make` command inside your container:
	docker run --rm -v "$PWD":/usr/src/myapp -w /usr/src/myapp gcc:4.9 make

View [license information](https://gcc.gnu.org/viewcvs/gcc/trunk/gcc/COPYING3?view=markup) for the software contained in this image.

	for repoDir in "${repoDirs[@]}"; do
		url="https://$gitUrl/blob/$commit/${dir}Dockerfile"
		echo $'-\t['"${repoDirTags["$repoDir"]}"' (*'"${dir}Dockerfile"'*)]('"$url"')'
	done
	echo

# What is Go?
Go (a.k.a., Golang) is a programming language first developed at Google. It is a statically-typed language with syntax loosely derived from C, but with additional features such as garbage collection, type safety, some dynamic-typing capabilities, additional built-in types (e.g., variable-length arrays and key-value maps), and a large standard library.
> [wikipedia.org/wiki/Go_(programming_language)](http://en.wikipedia.org/wiki/Go_%28programming_language%29)
%%LOGO%%
## Start a Go instance in your app
The most straightforward way to use this image is to use a Go container as both the build and runtime environment. In your `Dockerfile`, writing something along the lines of the following will compile and run your project:
	FROM golang:1.3-onbuild
This image includes multiple `ONBUILD` triggers which should cover most applications. The build will `COPY . /usr/src/app`, `RUN go get -d -v`, and `RUN go install -v`.
This image also includes the `CMD ["app"]` instruction which is the default command when running the image without arguments.
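Put differently, the `onbuild` variant behaves roughly as if it ended with the following instructions (an illustrative sketch, not the image's literal Dockerfile; the `WORKDIR` line is an assumption so that `go get` and `go install` run against the copied sources):

	# sketch of the documented triggers: COPY, `go get`, `go install`
	ONBUILD COPY . /usr/src/app
	ONBUILD WORKDIR /usr/src/app
	ONBUILD RUN go get -d -v
	ONBUILD RUN go install -v
	CMD ["app"]
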
You can then build and run the Docker image:
	docker build -t my-golang-app .
	docker run -it --rm --name my-running-app my-golang-app
## Compile your app inside the Docker container
There may be occasions where it is not appropriate to run your app inside a container. To compile, but not run your app inside the Docker instance, you can write something like:
	docker run --rm -v "$PWD":/usr/src/myapp -w /usr/src/myapp golang:1.3 go build -v
This will add your current directory as a volume to the container, set the working directory to the volume, and run the command `go build` which will tell go to compile the project in the working directory and output the executable to `myapp`. Alternatively, if you have a `Makefile`, you can run the `make` command inside your container.
	docker run --rm -v "$PWD":/usr/src/myapp -w /usr/src/myapp golang:1.3 make
## Cross-compile your app inside the Docker container
If you need to compile your application for a platform other than `linux/amd64` (such as `windows/386`), this can be easily accomplished with the provided `cross` tags:
	docker run --rm -v "$PWD":/usr/src/myapp -w /usr/src/myapp -e GOOS=windows -e GOARCH=386 golang:1.3-cross go build -v
Alternatively, you can build for multiple platforms at once:
	docker run --rm -it -v "$PWD":/usr/src/myapp -w /usr/src/myapp golang:1.3-cross bash
	$ for GOOS in darwin linux; do
	>   for GOARCH in 386 amd64; do
	>     go build -v -o myapp-$GOOS-$GOARCH
	>   done
	> done

View [license information](http://golang.org/LICENSE) for the software contained in this image.

# What is HAProxy?
HAProxy is a free, open source high availability solution, providing load balancing and proxying for TCP and HTTP-based applications by spreading requests across multiple servers. It is written in C and has a reputation for being fast and efficient (in terms of processor and memory usage).
> [wikipedia.org/wiki/HAProxy](https://en.wikipedia.org/wiki/HAProxy)
# How to use this image
Since no two users of HAProxy are likely to configure it exactly alike, this image does not come with any default configuration.
Please refer to [upstream's excellent (and comprehensive) documentation](https://cbonte.github.io/haproxy-dconv/) on the subject of configuring HAProxy for your needs.
It is also worth checking out the [`examples/` directory from upstream](http://www.haproxy.org/git?p=haproxy-1.5.git;a=tree;f=examples).
## Create a `Dockerfile`
	FROM haproxy:1.5
	COPY haproxy.cfg /usr/local/etc/haproxy/haproxy.cfg
Build and run:
	docker build -t my-haproxy .
	docker run -d --name my-running-haproxy my-haproxy
## Directly via bind mount
	docker run -d --name my-running-haproxy -v /path/to/haproxy.cfg:/usr/local/etc/haproxy/haproxy.cfg:ro haproxy:1.5

View [license information](http://www.haproxy.org/download/1.5/doc/LICENSE) for the software contained in this image.

# What is Haskell?
[Haskell](http://www.haskell.org) is a [lazy](http://en.wikibooks.org/wiki/Haskell/Laziness), functional, statically-typed programming language with advanced type system features such as higher-rank, higher-kinded parametric [polymorphism](http://en.wikibooks.org/wiki/Haskell/Polymorphism), monadic [effects](http://en.wikibooks.org/wiki/Haskell/Understanding_monads/IO), generalized algebraic data types ([GADT](http://en.wikibooks.org/wiki/Haskell/GADT)s), flexible [type classes](http://en.wikibooks.org/wiki/Haskell/Advanced_type_classes), associated [type families](http://en.wikipedia.org/wiki/Type_family), and more.
Haskell's [`ghc`](http://www.haskell.org/ghc) is a [portable](https://ghc.haskell.org/trac/ghc/wiki/Platforms), [optimizing](http://benchmarksgame.alioth.debian.org/u64q/haskell.php) compiler with a foreign-function interface ([FFI](http://en.wikibooks.org/wiki/Haskell/FFI)), an [LLVM backend](https://www.haskell.org/ghc/docs/7.8.3/html/users_guide/code-generators.html), and sophisticated runtime support for [concurrency](http://en.wikibooks.org/wiki/Haskell/Concurrency), explicit/implicit [parallelism](http://community.haskell.org/~simonmar/pcph/), runtime [profiling](http://www.haskell.org/haskellwiki/ThreadScope), etc. Other Haskell tools like [`criterion`](http://www.serpentine.com/criterion/tutorial.html), [`quickcheck`](https://www.fpcomplete.com/user/pbv/an-introduction-to-quickcheck-testing), [`hpc`](http://www.haskell.org/haskellwiki/Haskell_program_coverage#Examples), and [`haddock`](http://en.wikipedia.org/wiki/Haddock_%28software%29) provide advanced benchmarking, property-based testing, code coverage, and documentation generation.
A large number of production-quality Haskell libraries are available from [Hackage](https://hackage.haskell.org). The [`cabal`](https://www.fpcomplete.com/user/simonmichael/how-to-cabal-install) tool fetches packages and builds projects using the Hackage ecosystem.
%%LOGO%%
This image ships a minimal Haskell toolchain with the following packages:
- `ghc`
- `alex`
- `cabal-install`
- `happy`
## How to use this image
Start an interactive interpreter session with `ghci`:
$ docker run -it --rm haskell:7.8
GHCi, version 7.8.3: http://www.haskell.org/ghc/ :? for help
Loading package ghc-prim ... linking ... done.
Loading package integer-gmp ... linking ... done.
Loading package base ... linking ... done.
Prelude>
Dockerize a [Hackage](http://hackage.haskell.org) app with a Dockerfile inheriting from the base image:
FROM haskell:7.8
RUN cabal update && cabal install MazesOfMonad
VOLUME /root/.MazesOfMonad
ENTRYPOINT ["/root/.cabal/bin/mazesofmonad"]
Iteratively develop then ship a Haskell app with a Dockerfile utilizing the build cache:
FROM haskell:7.8
RUN cabal update
# Add .cabal file
ADD ./server/snap-example.cabal /opt/server/snap-example.cabal
# Docker will cache this command as a layer, freeing us up to
# modify source code without re-installing dependencies
RUN cd /opt/server && cabal install --only-dependencies -j4
# Add and Install Application Code
ADD ./server /opt/server
RUN cd /opt/server && cabal install
# Add installed cabal executables to PATH
ENV PATH /root/.cabal/bin:$PATH
# Default Command for Container
WORKDIR /opt/server
CMD ["snap-example"]
### Examples
See the application snippet above in more detail in the [example snap application](https://github.com/darinmorrison/docker-haskell/tree/master/examples/7.8.3/snap).
This image is licensed under the MIT License (see [LICENSE](https://github.com/darinmorrison/docker-haskell/blob/master/LICENSE)), and includes software licensed under the [Glasgow Haskell Compiler License](https://www.haskell.org/ghc/license) (BSD-style).
# Example output
$ docker run hello-world
Hello from Docker.
This message shows that your installation appears to be working correctly.
To generate this message, Docker took the following steps:
1. The Docker client contacted the Docker daemon.
2. The Docker daemon pulled the "hello-world" image from the Docker Hub.
(Assuming it was not already locally available.)
3. The Docker daemon created a new container from that image which runs the
executable that produces the output you are currently reading.
4. The Docker daemon streamed that output to the Docker client, which sent it
to your terminal.
To try something more ambitious, you can run an Ubuntu container with:
$ docker run -it ubuntu bash
For more examples and ideas, visit:
http://docs.docker.com/userguide/
$ docker images hello-world
REPOSITORY TAG IMAGE ID VIRTUAL SIZE
hello-world latest e45a5af57b00 910 B
%%LOGO%%
echo
echo '$ docker images hello-world'
docker images hello-world | awk -F' +' '{ print $1"\t"$2"\t"$3"\t"$5 }' | column -t -s$'\t'
} | sed 's/^/\t/'
echo
echo '%%LOGO%%'
# What is Hipache?
**Hipache** (pronounced `hɪ'pætʃɪ`) is a distributed proxy designed to route high volumes of http and websocket traffic to unusually large numbers of virtual hosts, in a highly dynamic topology where backends are added and removed several times per second. It is particularly well-suited for PaaS (platform-as-a-service) and other environments that are both business-critical and multi-tenant.
Hipache was originally developed at [dotCloud](http://www.dotcloud.com), a popular platform-as-a-service, to replace its first-generation routing layer based on a heavily instrumented nginx deployment. It currently serves production traffic for tens of thousands of applications hosted on dotCloud. Hipache is based on the node-http-proxy library.
# What is httpd?
The Apache HTTP Server, colloquially called Apache, is a Web server application notable for playing a key role in the initial growth of the World Wide Web. Originally based on the NCSA HTTPd server, development of Apache began in early 1995 after work on the NCSA code stalled. Apache quickly overtook NCSA HTTPd as the dominant HTTP server, and has remained the most popular HTTP server in use since April 1996.
> [wikipedia.org/wiki/Apache_HTTP_Server](http://en.wikipedia.org/wiki/Apache_HTTP_Server)
# How to use this image.
This image only contains Apache httpd with the defaults from upstream. There is no PHP installed, but it should not be hard to extend. On the other hand, if you just want PHP with Apache httpd, see the [PHP image](https://registry.hub.docker.com/_/php/) and look at the `-apache` tags. If you want to run a simple HTML server, add a simple Dockerfile to your project where `public-html/` is the directory containing all your HTML.
### Create a `Dockerfile` in your project
FROM httpd:2.4
COPY ./public-html/ /usr/local/apache2/htdocs/
Then, run the commands to build and run the Docker image:
docker build -t my-apache2 .
docker run -it --rm --name my-running-app my-apache2
### Without a `Dockerfile`
If you don't want to include a `Dockerfile` in your project, it is sufficient to do the following:
docker run -it --rm --name my-apache-app -v "$PWD":/usr/local/apache2/htdocs/ httpd:2.4
### Configuration
To customize the configuration of the httpd server, just `COPY` your custom configuration in as `/usr/local/apache2/conf/httpd.conf`.
FROM httpd:2.4
COPY ./my-httpd.conf /usr/local/apache2/conf/httpd.conf
#### SSL/HTTPS
If you want to run your web traffic over SSL, the simplest setup is to `COPY` or mount (`-v`) your `server.crt` and `server.key` into `/usr/local/apache2/conf/` and then customize the `/usr/local/apache2/conf/httpd.conf` by removing the comment from the line with `#Include conf/extra/httpd-ssl.conf`. This config file will use the certificate files previously added and tell the daemon to also listen on port 443. Be sure to also add something like `-p 443:443` to your `docker run` to forward the https port.
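As a sketch of the steps above (the `my-httpd.conf` and certificate file names are assumptions for illustration; `my-httpd.conf` here stands for a copy of the image's default config with the `Include conf/extra/httpd-ssl.conf` line uncommented):

```dockerfile
FROM httpd:2.4
# default httpd.conf with "Include conf/extra/httpd-ssl.conf" uncommented
COPY ./my-httpd.conf /usr/local/apache2/conf/httpd.conf
# certificate and key referenced by conf/extra/httpd-ssl.conf
COPY ./server.crt /usr/local/apache2/conf/server.crt
COPY ./server.key /usr/local/apache2/conf/server.key
```

You can then build and run it with something like `docker build -t my-httpd-ssl .` followed by `docker run -d -p 443:443 my-httpd-ssl`.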
The previous steps should work well for development, but we recommend customizing your conf files for production; see [httpd.apache.org](https://httpd.apache.org/docs/2.2/ssl/ssl_faq.html) for more information about SSL setup.
View [license information](https://www.apache.org/licenses/) for the software contained in this image.
# What is Hy?
Hy (a.k.a., Hylang) is a dialect of the Lisp programming language designed to interoperate with Python by translating expressions into Python's abstract syntax tree (AST). Similar to Clojure's mapping of s-expressions onto the JVM, Hy is meant to operate as a transparent Lisp front end to Python's abstract syntax. Hy also allows for Python libraries (including the standard library) to be imported and accessed alongside Hy code with a compilation step, converting the data structure of both into Python's AST.
> [wikipedia.org/wiki/Hy](https://en.wikipedia.org/wiki/Hy)
## Create a `Dockerfile` in your Hy project
FROM hylang:0.10
COPY . /usr/src/myapp
WORKDIR /usr/src/myapp
CMD [ "hy", "./your-daemon-or-script.hy" ]
You can then build and run the Docker image:
docker build -t my-hylang-app .
docker run -it --rm --name my-running-app my-hylang-app
## Run a single Hy script
For many simple, single file projects, you may find it inconvenient to write a complete `Dockerfile`. In such cases, you can run a Hy script by using the Hy Docker image directly:
docker run -it --rm --name my-running-script -v "$PWD":/usr/src/myapp -w /usr/src/myapp hylang:0.10 hy your-daemon-or-script.hy
View [license information](https://github.com/hylang/hy/blob/master/LICENSE) for the software contained in this image.
If you want to distribute your application on the docker registry, create a `Dockerfile` in the root of application directory:
FROM iojs:onbuild
# Expose the ports that your app uses. For example:
EXPOSE 8080
Then simply run:
$ docker build -t iojs-app .
...
$ docker run --rm -it iojs-app
To run a single script, you can mount it in a volume under `/usr/src/app`. From the root of your application directory (assuming your script is named `index.js`):
$ docker run -v ${PWD}:/usr/src/app -w /usr/src/app -it --rm iojs iojs index.js
View [license information](https://github.com/iojs/io.js/blob/master/LICENSE) for the software contained in this image.
# What is irssi?
Irssi is a terminal based IRC client for UNIX systems. It also supports SILC and ICB protocols via plugins. Some people refer to it as 'the client of the future'.
> [irssi.org](http://irssi.org)
# How to use this image
Because it is unlikely any two irssi users have the same configuration preferences, this image does not include an irssi configuration. To configure irssi to your liking, please refer to [upstream's excellent (and comprehensive) documentation](http://irssi.org/documentation).
Be sure to also check out the [awesome scripts](https://github.com/irssi/scripts.irssi.org) you can download to customize your irssi configuration.
## Directly via bind mount
On a Linux system, build and launch a container named `my-running-irssi` like this:
docker run -it --name my-running-irssi -e TERM -u $(id -u):$(id -g) \
-v $HOME/.irssi:/home/user/.irssi:ro \
-v /etc/localtime:/etc/localtime:ro \
irssi
On a Mac OS X system, run the same image using:
docker run -it --name my-running-irssi -e TERM -u $(id -u):$(id -g) \
-v $HOME/.irssi:/home/user/.irssi:ro \
irssi
You omit `/etc/localtime` on Mac OS X because `boot2docker` doesn't use this file.
Of course, you can name your container anything you like. In Docker 1.5 you can also use the `--read-only` flag. For example, on Linux:
docker run -it --name my-running-irssi -e TERM -u $(id -u):$(id -g) \
--read-only -v $HOME/.irssi:/home/user/.irssi \
-v /etc/localtime:/etc/localtime \
irssi
View [license information](https://github.com/irssi/irssi/blob/master/COPYING) for the software contained in this image.
# What is Java?
Java is a concurrent, class-based, object-oriented language specifically designed to have as few implementation dependencies as possible. It is intended to allow application developers to "write once, run anywhere", meaning that code that runs on one platform does not need to be recompiled to run on another.
Java is a registered trademark of Oracle and/or its affiliates.
> [wikipedia.org/wiki/Java_(programming_language)](http://en.wikipedia.org/wiki/Java_%28programming_language%29)
%%LOGO%%
## Start a Java instance in your app
The most straightforward way to use this image is to use a Java container as both the build and runtime environment. In your `Dockerfile`, writing something along the lines of the following will compile and run your project:
FROM java:7
COPY . /usr/src/myapp
WORKDIR /usr/src/myapp
RUN javac Main.java
CMD ["java", "Main"]
You can then build and run the Docker image:
docker build -t my-java-app .
docker run -it --rm --name my-running-app my-java-app
## Compile your app inside the Docker container
There may be occasions where it is not appropriate to run your app inside a container. To compile, but not run your app inside the Docker instance, you can write something like:
docker run --rm -v "$PWD":/usr/src/myapp -w /usr/src/myapp java:7 javac Main.java
This will add your current directory as a volume to the container, set the working directory to the volume, and run the command `javac Main.java`, which will tell Java to compile the code in `Main.java` and output the Java class file to `Main.class`.
View [license information](http://openjdk.java.net/legal/gplv2+ce.html) for the software contained in this image.
The Jenkins Continuous Integration and Delivery server.
This is a fully functional Jenkins server, based on the Long Term Support release [http://jenkins-ci.org/](http://jenkins-ci.org/).
![logo](http://jenkins-ci.org/sites/default/files/jenkins_logo.png)
# How to use this image
docker run -p 8080:8080 jenkins
This will store the workspace in /var/jenkins_home. All Jenkins data lives in there - including plugins and configuration. You will probably want to make that a persistent volume:
docker run --name myjenkins -p 8080:8080 -v /var/jenkins_home jenkins
The volume for the "myjenkins" named container will then be persistent.
You can also bind mount in a volume from the host:
First, ensure that /your/home is accessible by the jenkins user in the container (the jenkins user is normally uid 102; alternatively, use `-u root`), then:
docker run -p 8080:8080 -v /your/home:/var/jenkins_home jenkins
## Backing up data
If you bind mount in a volume - you can simply back up that directory (which is jenkins_home) at any time.
If your volume is inside a container, you can use the `docker cp $ID:/var/jenkins_home` command to extract the data.
## Attaching build executors
You can run builds on the master (out of the box), but if you want to attach build slave servers, make sure you map the port `-p 50000:50000`, which will be used when you connect a slave agent.
[Here](https://registry.hub.docker.com/u/maestrodev/build-agent/) is an example docker container you can use as a build server with lots of good tools installed, and it is well worth trying.
## Upgrading
All the data needed is in the /var/jenkins_home directory, so how you upgrade depends on how you manage that data. Generally, you can copy the data out, `docker pull` the image again to get the latest LTS, and then start up with `-v` pointing to that data (/var/jenkins_home); everything will be as you left it.
# What is Jetty?
Jetty is a pure Java-based HTTP (Web) server and Java Servlet container. While Web Servers are usually associated with serving documents to people, Jetty is now often used for machine to machine communications, usually within larger software frameworks. Jetty is developed as a free and open source project as part of the Eclipse Foundation. The web server is used in products such as Apache ActiveMQ, Alfresco, Apache Geronimo, Apache Maven, Apache Spark, Google App Engine, Eclipse, FUSE, Twitter's Streaming API and Zimbra. Jetty is also the server in open source projects such as Lift, Eucalyptus, Red5, Hadoop and I2P. Jetty supports the latest Java Servlet API (with JSP support) as well as protocols SPDY and WebSocket.
> [wikipedia.org/wiki/Jetty_(web_server)](https://en.wikipedia.org/wiki/Jetty_%28web_server%29)
%%LOGO%% Logo &copy; Eclipse Foundation
# How to use this image.
Run the default Jetty server (`CMD ["jetty.sh", "run"]`):
docker run -d jetty:9
You can test it by visiting `http://container-ip:8080` in a browser or, if you need access outside the host, on port 8888:
docker run -d -p 8888:8080 jetty:9
You can then go to `http://localhost:8888` or `http://host-ip:8888` in a browser.
The default Jetty environment in the image is:
JETTY_HOME = /usr/local/jetty
JETTY_CONF = /usr/local/jetty/etc/jetty.conf
JETTY_STATE = /usr/local/jetty/jetty.state
JETTY_ARGS =
JAVA_OPTIONS =
TMPDIR = /tmp
Webapps can be [deployed](https://wiki.eclipse.org/Jetty/Howto/Deploy_Web_Applications) in `/usr/local/jetty/webapps`.
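One way to deploy is from a derived image; a minimal sketch, assuming your application is packaged as `mywebapp.war` (an example name, not something provided by the image):

```dockerfile
FROM jetty:9
# Jetty's deployer scans this directory at startup
COPY ./mywebapp.war /usr/local/jetty/webapps/mywebapp.war
```

You could equally bind mount (`-v`) the WAR into the same directory at `docker run` time.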
View [license information](http://eclipse.org/jetty/licenses.php) for the software contained in this image.
# What is JRuby?
JRuby (http://www.jruby.org) is an implementation of Ruby (http://www.ruby-lang.org) on the JVM.
Ruby is a dynamic, reflective, object-oriented, general-purpose, open-source programming language. According to its authors, Ruby was influenced by Perl, Smalltalk, Eiffel, Ada, and Lisp. It supports multiple programming paradigms, including functional, object-oriented, and imperative. It also has a dynamic type system and automatic memory management.
> [wikipedia.org/wiki/Ruby_(programming_language)](https://en.wikipedia.org/wiki/Ruby_%28programming_language%29)
JRuby leverages the robustness and speed of the JVM while providing the same Ruby that you already know and love. With JRuby you are able to take advantage of real native threads, enhanced garbage collection, and even import and use java libraries.
%%LOGO%%
@@ -22,32 +14,28 @@ garbage collection, and even import and use java libraries.
## Create a `Dockerfile` in your Ruby app project
FROM jruby:1.7-onbuild
CMD ["./your-daemon-or-script.rb"]
Put this file in the root of your app, next to the `Gemfile`.
This image includes multiple `ONBUILD` triggers which should be all you need to bootstrap most applications. The build will `COPY . /usr/src/app` and `RUN bundle install`.
You can then build and run the Ruby image:
docker build -t my-ruby-app .
docker run -it --name my-running-script my-ruby-app
### Generate a `Gemfile.lock`
The `onbuild` tag expects a `Gemfile.lock` in your app directory. This `docker run` will help you generate one. Run it in the root of your app, next to the `Gemfile`:
docker run --rm -v "$PWD":/usr/src/app -w /usr/src/app jruby:1.7 bundle install --system
## Run a single Ruby script
For many simple, single file projects, you may find it inconvenient to write a complete `Dockerfile`. In such cases, you can run a Ruby script by using the Ruby Docker image directly:
docker run -it --rm --name my-running-script -v "$PWD":/usr/src/myapp -w /usr/src/myapp jruby:1.7 jruby your-daemon-or-script.rb
@@ -1,2 +1 @@
View [license information](https://github.com/jruby/jruby/blob/master/COPYING) for the software contained in this image.
@@ -1,45 +1,32 @@
# What is Mageia?
[Mageia](http://www.mageia.org) is a GNU/Linux-based, Free Software operating system. It is a [community](https://www.mageia.org/en/community/) project, supported by [a nonprofit organisation](https://www.mageia.org/en/about/#mageia.org) of elected contributors.
%%LOGO%%
Our mission: to build great tools for people.
Further than just delivering a secure, stable and sustainable operating system, the goal is to set up a stable and trustable governance to direct collaborative projects.
To date, Mageia:
- [started in September 2010 as a fork](https://www.mageia.org/en/about/2010-sept-announcement.html) of Mandriva Linux;
- gathered hundreds of careful individuals and several companies worldwide, who coproduce the infrastructure, the distribution itself, [documentation](https://wiki.mageia.org/), [delivery](https://www.mageia.org/en/downloads/) and [support](https://www.mageia.org/en/support/), using Free Software tools;
- released four major stable releases in June 2011, in May 2012, in May 2013 and in February 2014.
# How to use this image
## Create a Dockerfile for your container
FROM mageia:4
MAINTAINER "Foo Bar" <foo@bar.com>
CMD [ "bash" ]
## Installed packages
All images install the following packages:
- basesystem-minimal
- urpmi
- locales
- locales-en
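Since `urpmi` is preinstalled, a derived image can pull in extra packages; a minimal sketch (the package name is only an example):

```dockerfile
FROM mageia:4
# --auto accepts urpmi's prompts non-interactively
RUN urpmi --auto vim-enhanced
CMD [ "bash" ]
```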
@@ -1,18 +1,8 @@
# What is MariaDB?
MariaDB is a community-developed fork of the MySQL relational database management system intended to remain free under the GNU GPL. Being a fork of a leading open source software system, it is notable for being led by the original developers of MySQL, who forked it due to concerns over its acquisition by Oracle. Contributors are required to share their copyright with the MariaDB Foundation.
The intent is also to maintain high compatibility with MySQL, ensuring a "drop-in" replacement capability with library binary equivalency and exact matching with MySQL APIs and commands. It includes the XtraDB storage engine for replacing InnoDB, as well as a new storage engine, Aria, that intends to be both a transactional and non-transactional engine perhaps even included in future versions of MySQL.
> [wikipedia.org/wiki/MariaDB](https://en.wikipedia.org/wiki/MariaDB)
@@ -22,55 +12,36 @@ versions of MySQL.
## start a mariadb instance
docker run --name some-mariadb -e MYSQL_ROOT_PASSWORD=mysecretpassword -d mariadb
This image includes `EXPOSE 3306` (the mysql/mariadb port), so standard container linking will make it automatically available to the linked containers (as the following examples illustrate).
## connect to it from an application
Since MariaDB is intended as a drop-in replacement for MySQL, it can be used with many applications.
docker run --name some-app --link some-mariadb:mysql -d application-that-uses-mysql
## ... or via `mysql`
docker run -it --link some-mariadb:mysql --rm mariadb sh -c 'exec mysql -h"$MYSQL_PORT_3306_TCP_ADDR" -P"$MYSQL_PORT_3306_TCP_PORT" -uroot -p"$MYSQL_ENV_MYSQL_ROOT_PASSWORD"'
## Environment Variables
The MariaDB image uses several environment variables which are easy to miss. While not all the variables are required, they may significantly aid you in using the image. The variables use "MYSQL" since the MariaDB binary is `mysqld`.
### `MYSQL_ROOT_PASSWORD`
This is the one environment variable that is required for you to use the MariaDB image. This environment variable should be what you want to set the root password for MariaDB to. In the above example, it is being set to "mysecretpassword".
### `MYSQL_USER`, `MYSQL_PASSWORD`
These optional environment variables are used in conjunction to set both a MariaDB user and password, which will subsequently be granted all permissions for the database specified by the optional `MYSQL_DATABASE` variable. Note that if you only have one of these two environment variables, then neither will actually do anything - these two are meant to be used in conjunction with one another.
### `MYSQL_DATABASE`
This optional environment variable denotes the name of a database to create. If a user/password was supplied (via the `MYSQL_USER` and `MYSQL_PASSWORD` environment variables) then that user account will be granted (`GRANT ALL`) access to this database.
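Putting the optional variables together with the required root password (all names and passwords below are placeholders):

```shell
docker run --name some-mariadb \
    -e MYSQL_ROOT_PASSWORD=mysecretpassword \
    -e MYSQL_USER=appuser \
    -e MYSQL_PASSWORD=apppassword \
    -e MYSQL_DATABASE=appdb \
    -d mariadb
```

This creates the `appdb` database on first start and grants `appuser` full access to it.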
# Caveats
If there is no database when `mariadb` starts in a container, then `mariadb` will create the default database for you. While this is the expected behavior of `mariadb`, this means that it will not accept incoming connections during that time. This may cause issues when using automation tools, such as `fig`, that start several containers simultaneously.
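One way to cope without such tooling is a small retry wrapper that polls until a probe succeeds. This is a generic sketch; the `mysqladmin` probe shown in the comment is an assumption based on the linked-container environment variables above:

```shell
#!/bin/sh
# Run a probe command until it succeeds or the attempts run out.
wait_for() {
    attempts="$1"; shift
    i=0
    while [ "$i" -lt "$attempts" ]; do
        if "$@" > /dev/null 2>&1; then
            return 0
        fi
        i=$((i + 1))
        sleep 1
    done
    return 1
}

# Hypothetical probe against a linked MariaDB container:
#   wait_for 30 mysqladmin ping -h"$MYSQL_PORT_3306_TCP_ADDR" --silent
wait_for 5 true && echo "ready"
```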
@@ -1,10 +1,6 @@
# What is Maven?
[Apache Maven](http://maven.apache.org) is a software project management and comprehension tool. Based on the concept of a project object model (POM), Maven can manage a project's build, reporting and documentation from a central piece of information.
%%LOGO%%
@@ -12,26 +8,20 @@ reporting and documentation from a central piece of information.
## Create a Dockerfile in your Maven project
FROM maven:3.2-jdk-7-onbuild
CMD ["do-something-with-built-packages"]
Put this file in the root of your project, next to the `pom.xml`.
This image includes multiple ONBUILD triggers which should be all you need to bootstrap. The build will `COPY . /usr/src/app` and `RUN mvn install`.
You can then build and run the image:
docker build -t my-maven .
docker run -it --name my-maven-script my-maven
## Run a single Maven command
For many simple projects, you may find it inconvenient to write a complete `Dockerfile`. In such cases, you can run a Maven project by using the Maven Docker image directly, passing a Maven command to `docker run`:
docker run -it --rm --name my-maven-project -v "$PWD":/usr/src/mymaven -w /usr/src/mymaven maven:3.2-jdk-7 mvn clean install
@@ -1,2 +1 @@
View [license information](https://www.apache.org/licenses/) for the software contained in this image.
@@ -1,29 +1,19 @@
# What is Memcached?
Memcached is a general-purpose distributed memory caching system. It is often used to speed up dynamic database-driven websites by caching data and objects in RAM to reduce the number of times an external data source (such as a database or API) must be read.
Memcached's APIs provide a very large hash table distributed across multiple machines. When the table is full, subsequent inserts cause older data to be purged in least recently used order. Applications using Memcached typically layer requests and additions into RAM before falling back on a slower backing store, such as a database.
> [wikipedia.org/wiki/Memcached](https://en.wikipedia.org/wiki/Memcached)
# How to use this image
docker run --name my-memcache -d memcached
Start your memcached container with the above command and then you can connect your app to it with standard linking:
docker run --link my-memcache:memcache -d my-app-image
The memcached server information would then be available through the ENV variables generated by the link as well as through DNS as `memcache` from `/etc/hosts`.
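To see exactly what the link injects, one can dump the environment from a throwaway linked container (this sketch uses the stock `busybox` image; the exact variable values will differ per host):

```shell
docker run --rm --link my-memcache:memcache busybox env
```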
For information on configuring your memcached server, see the extensive [wiki](https://code.google.com/p/memcached/wiki/NewStart).
@@ -1,3 +1 @@
View [license information](https://github.com/memcached/memcached/blob/master/LICENSE) for the software contained in this image.
@@ -1,20 +1,8 @@
# What is MongoDB?
MongoDB (from "humongous") is a cross-platform document-oriented database. Classified as a NoSQL database, MongoDB eschews the traditional table-based relational database structure in favor of JSON-like documents with dynamic schemas (MongoDB calls the format BSON), making the integration of data in certain types of applications easier and faster. Released under a combination of the GNU Affero General Public License and the Apache License, MongoDB is free and open-source software.
First developed by the software company 10gen (now MongoDB Inc.) in October 2007 as a component of a planned platform as a service product, the company shifted to an open source development model in 2009, with 10gen offering commercial support and other services. Since then, MongoDB has been adopted as backend software by a number of major websites and services, including Craigslist, eBay, Foursquare, SourceForge, Viacom, and the New York Times, among others. MongoDB is the most popular NoSQL database system.
> [wikipedia.org/wiki/MongoDB](https://en.wikipedia.org/wiki/MongoDB)
@@ -24,21 +12,18 @@ is the most popular NoSQL database system.
## start a mongo instance
docker run --name some-mongo -d mongo
This image includes `EXPOSE 27017` (the mongo port), so standard container linking will make it automatically available to the linked containers (as the following examples illustrate).
## connect to it from an application
docker run --name some-app --link some-mongo:mongo -d application-that-uses-mongo
## ... or via `mongo`
docker run -it --link some-mongo:mongo --rm mongo sh -c 'exec mongo "$MONGO_PORT_27017_TCP_ADDR:$MONGO_PORT_27017_TCP_PORT/test"'
## Configuration
See the [official docs](http://docs.mongodb.org/manual/) for information on using and configuring MongoDB for things like replica sets and sharding.
@@ -1,3 +1 @@
View [license information](https://github.com/mongodb/mongo/blob/7c3cfac300cfcca4f73f1c3b18457f0f8fae3f69/README#L71) for the software contained in this image.
@@ -1,13 +1,9 @@
# What is Mono?
Sponsored by Xamarin, Mono is an open source implementation of Microsoft's .NET Framework based on the ECMA standards for C# and the Common Language Runtime. A growing family of solutions and an active and enthusiastic contributing community is helping position Mono to become the leading choice for development of cross platform applications.
- [Mono Project homepage](http://www.mono-project.com/)
- [http://en.wikipedia.org/wiki/Mono_(software)](http://en.wikipedia.org/wiki/Mono_%28software%29)
%%LOGO%%
@@ -19,21 +15,17 @@ This image will run stand-alone Mono console apps.
This example Dockerfile will run an executable called `TestingConsoleApp.exe`.
FROM mono:3.10-onbuild
CMD [ "mono", "./TestingConsoleApp.exe" ]
Place this file in the root of your app, next to the `.sln` solution file. Modify the executable name to match what you want to run.
This image includes `ONBUILD` triggers that add your app source code to `/usr/src/app/source`, restore NuGet packages, and compile the app, placing the output in `/usr/src/app/build`.
With the Dockerfile in place, you can build and run a Docker image with your app:
docker build -t my-app .
docker run my-app
You should see any output from your app now.
@@ -1,2 +1 @@
This Docker Image is licensed with the Expat License. See the [Mono Project licensing FAQ](http://www.mono-project.com/docs/faq/licensing/) for details on how Mono and associated libraries are licensed.
@@ -1,15 +1,8 @@
# What is MySQL?
MySQL is (as of March 2014) the world's second most widely used open-source relational database management system (RDBMS). It is named after co-founder Michael Widenius's daughter, My. The SQL phrase stands for Structured Query Language.
MySQL is a popular choice of database for use in web applications, and is a central component of the widely used LAMP open source web application software stack (and other 'AMP' stacks). LAMP is an acronym for "Linux, Apache, MySQL, Perl/PHP/Python." Free-software-open source projects that require a full-featured database management system often use MySQL.
Oracle Corporation and/or affiliates own the copyright and trademark for MySQL.
@@ -21,56 +14,34 @@ Oracle Corporation and/or affiliates own the copyright and trademark for MySQL.
## start a mysql instance
docker run --name some-mysql -e MYSQL_ROOT_PASSWORD=mysecretpassword -d mysql
This image includes `EXPOSE 3306` (the mysql port), so standard container linking will make it automatically available to the linked containers (as the following examples illustrate).
## connect to it from an application
docker run --name some-app --link some-mysql:mysql -d application-that-uses-mysql
## ... or via `mysql`
docker run -it --link some-mysql:mysql --rm mysql sh -c 'exec mysql -h"$MYSQL_PORT_3306_TCP_ADDR" -P"$MYSQL_PORT_3306_TCP_PORT" -uroot -p"$MYSQL_ENV_MYSQL_ROOT_PASSWORD"'
## Environment Variables
The MySQL image uses several environment variables which are easy to miss. While not all the variables are required, they may significantly aid you in using the image.
### `MYSQL_ROOT_PASSWORD`
This is the one environment variable that is required for you to use the MySQL image. This environment variable should be what you want to set the root password for MySQL to. In the above example, it is being set to "mysecretpassword".
### `MYSQL_USER`, `MYSQL_PASSWORD`
These optional environment variables are used in conjunction to set both a MySQL user and password, which will subsequently be granted all permissions for the database specified by the optional `MYSQL_DATABASE` variable. Note that if you only have one of these two environment variables, then neither will actually do anything - these two are meant to be used in conjunction with one another. When these variables are used, it will create a new user with the given password in the MySQL database - there is no need to specify `MYSQL_USER` with `root`, as the `root` user already exists in the default MySQL and the password is controlled by `MYSQL_ROOT_PASSWORD`.
### `MYSQL_DATABASE`
This optional environment variable denotes the name of a database to create. If a user/password was supplied (via the `MYSQL_USER` and `MYSQL_PASSWORD` environment variables) then that user account will be granted (`GRANT ALL`) access to this database.
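A full invocation combining these variables might look like this (all names and passwords below are placeholders):

```shell
docker run --name some-mysql \
    -e MYSQL_ROOT_PASSWORD=mysecretpassword \
    -e MYSQL_USER=appuser \
    -e MYSQL_PASSWORD=apppassword \
    -e MYSQL_DATABASE=appdb \
    -d mysql
```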
# Caveats
If there is no database when `mysql` starts in a container, then `mysql` will create the default database for you. While this is the expected behavior of `mysql`, this means that it will not accept incoming connections during that time. This may cause issues when using automation tools, such as `fig`, that start several containers simultaneously.
@@ -1,11 +1,6 @@
# What is NeuroDebian?
NeuroDebian provides a large collection of popular neuroscience research software for the [Debian](http://www.debian.org) operating system as well as [Ubuntu](http://www.ubuntu.com) and other derivatives. Popular packages include *AFNI*, *FSL*, *PyMVPA*, and many others. While we do strive to maintain a high level of quality, we make no guarantee that a given package works as expected, so use them at your own risk.
> [neuro.debian.net](http://neuro.debian.net/)
@@ -13,24 +8,17 @@ so use them at your own risk.
# About this image
NeuroDebian images only add the NeuroDebian repository and the repository's GPG key. No APT indexes are downloaded, so `apt-get update` needs to be run before any use of `apt-get`.
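In practice this means chaining the update into any install step; a sketch (the `python-mvpa2` package is just one example of a NeuroDebian-provided package):

```shell
docker run -it --rm neurodebian:latest \
    sh -c 'apt-get update && apt-get install -y python-mvpa2'
```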
`nd` tags are used to reflect suffixes used in versions of packages available from NeuroDebian.
The `neurodebian:latest` tag will always point to the NeuroDebian-enabled latest stable release of Debian (which is, at the time of this writing, `debian:wheezy`).
## sources.list
The NeuroDebian APT file is installed at `/etc/apt/sources.list.d/neurodebian.sources.list` and currently enables only the `main` (DFSG-compliant) area of the archive:
> docker run neurodebian:latest cat /etc/apt/sources.list.d/neurodebian.sources.list
deb http://neuro.debian.net/debian wheezy main
deb http://neuro.debian.net/debian data main
#deb-src http://neuro.debian.net/debian-devel wheezy main
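If you also want source packages, the commented `deb-src` line can be uncommented with `sed`. A sketch, shown here against a local copy of the file (inside the container the path is `/etc/apt/sources.list.d/neurodebian.sources.list`):

```shell
# work on a local copy of the sources list for illustration
list=neurodebian.sources.list
cat > "$list" <<'EOF'
deb http://neuro.debian.net/debian wheezy main
deb http://neuro.debian.net/debian data main
#deb-src http://neuro.debian.net/debian-devel wheezy main
EOF

# uncomment the deb-src entry to enable source packages
sed -i 's/^#deb-src/deb-src/' "$list"
grep '^deb-src' "$list"
```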

View File

@ -1,12 +1,6 @@
# What is Nginx?
Nginx (pronounced "engine-x") is an open source reverse proxy server for HTTP, HTTPS, SMTP, POP3, and IMAP protocols, as well as a load balancer, HTTP cache, and a web server (origin server). The nginx project started with a strong focus on high concurrency, high performance and low memory usage. It is licensed under the 2-clause BSD-like license and it runs on Linux, BSD variants, Mac OS X, Solaris, AIX, HP-UX, as well as on other *nix flavors. It also has a proof of concept port for Microsoft Windows.
> [wikipedia.org/wiki/Nginx](https://en.wikipedia.org/wiki/Nginx)
@ -16,51 +10,40 @@ concept port for Microsoft Window..
## hosting some simple static content
docker run --name some-nginx -v /some/content:/usr/share/nginx/html:ro -d nginx
Alternatively, a simple `Dockerfile` can be used to generate a new image that includes the necessary content (which is a much cleaner solution than the bind mount above):
FROM nginx
COPY static-html-directory /usr/share/nginx/html
Place this file in the same directory as your directory of content ("static-html-directory"), run `docker build -t some-content-nginx .`, then start your container:
docker run --name some-nginx -d some-content-nginx
## exposing the port
docker run --name some-nginx -d -p 8080:80 some-content-nginx
Then you can hit `http://localhost:8080` or `http://host-ip:8080` in your browser.
## complex configuration
docker run --name some-nginx -v /some/nginx.conf:/etc/nginx/nginx.conf:ro -d nginx
For information on the syntax of the Nginx configuration files, see [the official documentation](http://nginx.org/en/docs/) (specifically the [Beginner's Guide](http://nginx.org/en/docs/beginners_guide.html#conf_structure)).
Be sure to include `daemon off;` in your custom configuration to ensure that Nginx stays in the foreground so that Docker can track the process properly (otherwise your container will stop immediately after starting)!
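A minimal sketch of a custom `nginx.conf` with `daemon off;` set (the other directives and paths are illustrative defaults, not requirements):

```nginx
# keep nginx in the foreground so Docker can track the process
daemon off;
worker_processes 1;

events {
    worker_connections 1024;
}

http {
    server {
        listen 80;

        location / {
            root /usr/share/nginx/html;
        }
    }
}
```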
If you wish to adapt the default configuration, use something like the following to copy it from a running Nginx container:
docker cp some-nginx:/etc/nginx/nginx.conf /some/nginx.conf
As above, this can also be accomplished more cleanly using a simple `Dockerfile`:
FROM nginx
COPY nginx.conf /etc/nginx/nginx.conf
Then, build with `docker build -t some-custom-nginx .` and run:
docker run --name some-nginx -d some-custom-nginx


@ -1,20 +1,10 @@
# What is Node.js?
Node.js is a software platform for scalable server-side and networking applications. Node.js applications are written in JavaScript and can be run within the Node.js runtime on Mac OS X, Windows, and Linux without changes.
Node.js applications are designed to maximize throughput and efficiency, using non-blocking I/O and asynchronous events. Node.js applications run single-threaded, although Node.js uses multiple threads for file and network events. Node.js is commonly used for real-time applications due to its asynchronous nature.
Node.js internally uses the Google V8 JavaScript engine to execute code; a large percentage of the basic modules are written in JavaScript. Node.js contains a built-in, asynchronous I/O library for file, socket, and HTTP communication. The HTTP and socket support allows Node.js to act as a web server without additional software such as Apache.
> [wikipedia.org/wiki/Node.js](https://en.wikipedia.org/wiki/Node.js)
@ -24,26 +14,21 @@ software such as Apache.
## Create a `Dockerfile` in your Node.js app project
FROM node:0.10-onbuild
# replace this with your application's default port
EXPOSE 8888
You can then build and run the Docker image:
docker build -t my-nodejs-app .
docker run -it --rm --name my-running-app my-nodejs-app
### Notes
The image assumes that your application has a file named [`package.json`](https://docs.npmjs.com/files/package.json) listing its dependencies and defining its [start script](https://docs.npmjs.com/misc/scripts#default-values).
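A minimal sketch of such a `package.json` (the package name and the `server.js` entry point are illustrative; the `onbuild` image runs `npm start` by default):

```json
{
  "name": "my-nodejs-app",
  "version": "0.0.1",
  "scripts": {
    "start": "node server.js"
  }
}
```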
## Run a single Node.js script
For many simple, single file projects, you may find it inconvenient to write a complete `Dockerfile`. In such cases, you can run a Node.js script by using the Node.js Docker image directly:
docker run -it --rm --name my-running-script -v "$PWD":/usr/src/myapp -w /usr/src/myapp node:0.10 node your-daemon-or-script.js


@ -1,2 +1 @@
View [license information](https://github.com/joyent/node/blob/master/LICENSE) for the software contained in this image.


@ -1,13 +1,6 @@
# What is Odoo?
Odoo, formerly known as OpenERP, is a suite of open-source business apps written in Python and released under the AGPL license. This suite of applications covers all business needs, from Website/Ecommerce down to manufacturing, inventory and accounting, all seamlessly integrated. It is the first time a software vendor has managed to reach such functional coverage. Odoo is the most installed business software in the world, used by 2,000,000 users worldwide, ranging from very small companies (1 user) to very large ones (300,000 users).
> [www.odoo.com](https://www.odoo.com)
@ -19,56 +12,44 @@ This image requires a running PostgreSQL server.
## Start a PostgreSQL server
docker run -d -e POSTGRES_USER=odoo -e POSTGRES_PASSWORD=odoo --name db postgres
## Start an Odoo instance
docker run -p 127.0.0.1:8069:8069 --name odoo --link db:db -t odoo
The alias of the container running Postgres must be `db` for Odoo to be able to connect to the Postgres server.
## Stop and restart an Odoo instance
docker stop odoo
docker start -a odoo
## Stop and restart a PostgreSQL server
When a PostgreSQL server is restarted, the Odoo instances linked to that server must be restarted as well because the server address has changed and the link is thus broken.
Restarting a PostgreSQL server does not affect the created databases.
## Run Odoo with a custom configuration
The default configuration file for the server (located at `/etc/odoo/openerp-server.conf`) can be overridden at startup using volumes. Suppose you have a custom configuration at `/path/to/config/openerp-server.conf`; then:
docker run -v /path/to/config:/etc/odoo -p 127.0.0.1:8069:8069 --name odoo --link db:db -t odoo
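A minimal sketch of what such a custom `openerp-server.conf` might contain (the values are illustrative; `db_host = db` matches the linked container alias used above):

```ini
[options]
; database connection
db_host = db
db_port = 5432
db_user = odoo
db_password = odoo
; master password for database management operations
admin_passwd = mysupersecret
```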
## Run multiple Odoo instances
docker run -p 127.0.0.1:8070:8069 --name odoo2 --link db:db -t odoo
docker run -p 127.0.0.1:8071:8069 --name odoo3 --link db:db -t odoo
Please note that for plain use of the mail and report functionalities, when the host and container ports differ (e.g. 8070 and 8069), one has to set, in Odoo, Settings -> Parameters -> System Parameters (requires technical features), `web.base.url` to the container port (e.g. `127.0.0.1:8069`).
# How to upgrade this image
Suppose you created a database from an Odoo instance named `old-odoo`, and you want to access this database from a new Odoo instance named `new-odoo`, e.g. because you've just downloaded a newer Odoo image.
By default, Odoo 8.0 uses a filestore (located at `/var/lib/odoo/.local/share/Odoo/filestore/`) for attachments. You should restore this filestore in your new Odoo instance by running:
docker run --volumes-from old-odoo -p 127.0.0.1:8070:8069 --name new-odoo --link db:db -t odoo
You can also simply prevent Odoo from using the filestore by setting the system parameter `ir_attachment.location` to `db-storage` in Settings->Parameters->System Parameters (requires technical features).


@ -1,2 +1 @@
View [license information](https://raw.githubusercontent.com/odoo/odoo/8.0/LICENSE) for the software contained in this image.


@ -4,15 +4,11 @@ This project contains the stable releases of the openSUSE distribution.
# Naming conventions
Each image is tagged using both the release number (e.g. *"13.1"*) and the code name (e.g. *"Bottle"*). The latest stable release is always available using the *"latest"* tag.
# Building
These images are generated using [KIWI](https://github.com/openSUSE/kiwi). Their source files can be found in [this repository](https://github.com/openSUSE/docker-containers).
# Repositories and packages
@ -20,7 +16,7 @@ The package selection is kept minimal to reduce the footprint of the image.
However, the following repositories are already part of the image:
- OSS
- OSS Updates
- Non-OSS
- Non-OSS Updates


@ -2,28 +2,21 @@
%%LOGO%%
Oracle Linux is an open-source operating system available under the GNU General Public License (GPLv2). Suitable for general purpose or Oracle workloads, it benefits from rigorous testing of more than 128,000 hours per day with real-world workloads and includes unique innovations such as Ksplice for zero-downtime kernel patching, DTrace for real-time diagnostics, the powerful Btrfs file system, and more.
## How to use these images
The Oracle Linux images are intended for use in the **FROM** field of an application's `Dockerfile`. For example, to use Oracle Linux 6 as the base of an image, specify `FROM oraclelinux:6`.
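For example, a derived image might look like this (a sketch; the `httpd` package and foreground command are illustrative, not part of the base image):

```dockerfile
FROM oraclelinux:6
# install a web server from the Oracle Public Yum repositories
RUN yum install -y httpd && yum clean all
EXPOSE 80
# run in the foreground so the container keeps running
CMD ["/usr/sbin/httpd", "-D", "FOREGROUND"]
```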
## Official Resources
- [Learn more about Oracle Linux](http://oracle.com/linux)
- [Unbreakable Linux Network](https://linux.oracle.com)
- [Oracle Public Yum](http://public-yum.oracle.com)
## Social media resources
- [Twitter](https://twitter.com/ORCL_Linux)
- [Facebook](https://www.facebook.com/OracleLinux)
- [YouTube](https://www.youtube.com/user/OracleLinuxChannel)
- [Blog](http://blogs.oracle.com/linux)


@ -1,2 +1 @@
View the [Oracle Linux End-User License Agreement](https://oss.oracle.com/ol6/EULA) for the software contained in this image.


@ -1,25 +1,16 @@
## Customer Support
Oracle provides support to Oracle Linux subscription customers via the [My Oracle Support](https://support.oracle.com) portal. The Oracle Linux Docker images are covered by Oracle Linux Basic and Premier support subscriptions. Customers should follow existing support procedures to obtain support for Oracle Linux running in a Docker container.
## Community Support
For Oracle Linux users without a paid support subscription, the following resources are available:
- The [Oracle Linux Forum](https://community.oracle.com/community/server_%26_storage_systems/linux/oracle_linux) on the [Oracle Technology Network Community](https://community.oracle.com/welcome).
- The `#oraclelinux` IRC channel on Freenode.
## Contributing
Oracle Linux customers with an active support subscription should follow the existing support procedures to suggest new features, fixes or updates to the Oracle Linux Docker images.
For Oracle Linux users without a paid support subscription, please submit any new feature, fix or update suggestion via a [GitHub issue](%%GITHUB-REPO%%/issues).


@ -1,8 +1,6 @@
# What is Perl?
Perl is a high-level, general-purpose, interpreted, dynamic programming language. The Perl language borrows features from other programming languages, including C, shell scripting (sh), AWK, and sed.
> [wikipedia.org/wiki/Perl](https://en.wikipedia.org/wiki/Perl)
@ -12,20 +10,18 @@ including C, shell scripting (sh), AWK, and sed.
## Create a `Dockerfile` in your Perl app project
FROM perl:5.20
COPY . /usr/src/myapp
WORKDIR /usr/src/myapp
CMD [ "perl", "./your-daemon-or-script.pl" ]
Then, build and run the Docker image:
docker build -t my-perl-app .
docker run -it --rm --name my-running-app my-perl-app
## Run a single Perl script
For many simple, single file projects, you may find it inconvenient to write a complete `Dockerfile`. In such cases, you can run a Perl script by using the Perl Docker image directly:
docker run -it --rm --name my-running-script -v "$PWD":/usr/src/myapp -w /usr/src/myapp perl:5.20 perl your-daemon-or-script.pl


@ -1,2 +1 @@
View [license information](http://dev.perl.org/licenses/) for the software contained in this image.


@ -1,89 +1,67 @@
# What is Zend Server?
Zend Server is the integrated application platform for PHP mobile and web apps. Zend Server provides you with a highly available PHP production environment which includes, amongst other features, a highly reliable PHP stack, application monitoring, troubleshooting, and the all-new Z-Ray.

### Boost your Development with Z-Ray

Using Zend Server Z-Ray is akin to wearing X-Ray goggles, effortlessly giving developers deep insight into how their code is running as they are developing it, all without having to change any of their habits or workflow. With Z-Ray, developers can immediately understand the impact of their code changes, enabling them to both improve quality and solve issues long before their code reaches production. In addition to the obvious benefits of this left shifting (better performance, fewer production issues, and faster recovery times), using Z-Ray is also downright fun!

### Powering Continuous Delivery

Zend Server is the platform that enables Continuous Delivery, which provides consistency, automation, and collaboration capabilities throughout the application delivery cycle. Patterns are available to integrate Zend Server with Chef, Jenkins, Nagios, VMware, and Puppet.
### Additional Resources

- http://www.zend.com/
- http://kb.zend.com/
- http://files.zend.com/help/Zend-Server/zend-server.htm#faqs.htm
- http://files.zend.com/help/Zend-Server/zend-server.htm#getting_started.htm
# PHP-ZendServer
This is a cluster-enabled version of a Dockerized Zend Server 7.0 container. With Zend Server on Docker, you'll get your PHP applications up and running on a highly available PHP production environment which includes, amongst other features, a highly reliable PHP stack, application monitoring, troubleshooting, and the innovative new technology, Z-Ray. Z-Ray gives developers unprecedented visibility into their code by tracking and displaying, in a toolbar, live and detailed info on how the various elements constructing their page are performing.
## Usage
#### Launching the Container from Docker-Hub
Zend Server is shared on [Docker-Hub](https://registry.hub.docker.com/_/php-zendserver/) as **php-zendserver**.

- To start a single Zend Server instance, execute:
$ docker run php-zendserver
- You can specify the PHP and Zend Server version by appending `:<php-version>` or `:<ZS-version>-php<version>` to the `docker run` command. Available PHP versions are 5.4 and 5.5 (5.5 is the default), with Zend Server 7 (for example: `php-zendserver:7.0-php5.4`).
- To start a Zend Server cluster, execute the following command for each cluster node:
$ docker run -e MYSQL_HOSTNAME=<db-ip> -e MYSQL_PORT=3306 -e MYSQL_USERNAME=<username> -e MYSQL_PASSWORD=<password> -e MYSQL_DBNAME=zend php-zendserver
#### Launching the Container from Dockerfile
- From a local folder containing a clone of this repo, execute the following command to generate the image. The **image-id** will be output:
$ docker build .
- To start a single Zend Server instance, execute:
$ docker run <image-id>
- To start a Zend Server cluster, execute the following command on each cluster node:
$ docker run -e MYSQL_HOSTNAME=<db-ip> -e MYSQL_PORT=3306 -e MYSQL_USERNAME=<username> -e MYSQL_PASSWORD=<password> -e MYSQL_DBNAME=zend <image-id>
#### Accessing Zend Server
Once started, the container will output the information required to access the PHP application and the Zend Server UI, including an automatically generated admin password.
To access the container **remotely**, port forwarding must be configured, either manually or using Docker. For example, this command redirects port 80 to port 88, and port 10081 (the Zend Server UI port) to port 10088:
$ docker run -p 88:80 -p 10088:10081 php-zendserver
For clustered instances:
$ docker run -p 88:80 -p 10088:10081 -e MYSQL_HOSTNAME=<db-ip> -e MYSQL_PORT=3306 -e MYSQL_USERNAME=<username> -e MYSQL_PASSWORD=<password> -e MYSQL_DBNAME=zend <image-id>
Please note that when running multiple instances, only one instance can be bound to a given host port. If you are running a cluster, either assign a port redirect to one node only, or assign a different port to each container.
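The per-container port assignment can be scripted. A dry-run sketch that prints the `docker run` command for a three-node cluster, giving each container its own host ports (the container names and port offsets are illustrative; drop the `echo` to actually launch the containers):

```shell
for i in 1 2 3; do
  # each node gets its own host ports: 8081..8083 for HTTP, 10081..10083 for the UI
  http_port=$((8080 + i))
  ui_port=$((10080 + i))
  echo docker run -d --name "zs-node-$i" \
    -p "$http_port:80" -p "$ui_port:10081" \
    php-zendserver
done
```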
#### Env variables
Env variables are passed in the run command with the `-e` switch.
##### Optional env-variables:
To specify a pre-defined admin password for Zend Server, use:

- ZS_ADMIN_PASSWORD
MySQL vars for clustered ops. *ALL* are required for the node to properly join a cluster:

- MYSQL_HOSTNAME - IP or hostname of the MySQL database
- MYSQL_PORT - MySQL listening port
- MYSQL_USERNAME
- MYSQL_PASSWORD
- MYSQL_DBNAME - Name of the database Zend Server will use for cluster ops (created automatically if it does not exist).
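Since all five variables are required, a wrapper script can check them before invoking `docker run`. A sketch (the variable list comes from above; the check itself is an illustrative convenience, not part of the image):

```shell
# fail fast if any required cluster variable is unset or empty
missing=0
for v in MYSQL_HOSTNAME MYSQL_PORT MYSQL_USERNAME MYSQL_PASSWORD MYSQL_DBNAME; do
  eval "val=\${$v:-}"
  if [ -z "$val" ]; then
    echo "missing required variable: $v" >&2
    missing=$((missing + 1))
  fi
done
# only proceed to `docker run` when $missing is 0
```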
To specify a pre-purchased license, use the following env vars:

- ZEND_LICENSE_KEY
- ZEND_LICENSE_ORDER
### Minimal Requirements
- Each Zend Server Docker container requires 1GB of available memory.


@ -1,11 +1,6 @@
# What is PHP?
PHP is a server-side scripting language designed for web development, but which can also be used as a general-purpose programming language. PHP can be added to straight HTML or it can be used with a variety of templating engines and web frameworks. PHP code is usually processed by an interpreter, which is either implemented as a native module on the web server or as a Common Gateway Interface (CGI).
> [wikipedia.org/wiki/PHP](http://en.wikipedia.org/wiki/PHP)
@ -15,80 +10,67 @@ interface (CGI).
## With Command Line
For PHP projects run through the command line interface (CLI), you can do the following.
### Create a `Dockerfile` in your PHP project
    FROM php:5.6-cli
    COPY . /usr/src/myapp
    WORKDIR /usr/src/myapp
    CMD [ "php", "./your-script.php" ]
Then, run the commands to build and run the Docker image:
    docker build -t my-php-app .
    docker run -it --rm --name my-running-app my-php-app
### Run a single PHP script
For many simple, single file projects, you may find it inconvenient to write a complete `Dockerfile`. In such cases, you can run a PHP script by using the PHP Docker image directly:
    docker run -it --rm --name my-running-script -v "$PWD":/usr/src/myapp -w /usr/src/myapp php:5.6-cli php your-script.php
## With Apache
More commonly, you will probably want to run PHP in conjunction with Apache httpd. Conveniently, there's a version of the PHP container that's packaged with the Apache web server.
### Create a `Dockerfile` in your PHP project
    FROM php:5.6-apache
    COPY src/ /var/www/html/
Where `src/` is the directory containing all your PHP code. Then, run the commands to build and run the Docker image:
    docker build -t my-php-app .
    docker run -it --rm --name my-running-app my-php-app
We recommend that you add a custom `php.ini` configuration. `COPY` it into `/usr/local/etc/php` by adding one more line to the Dockerfile above and running the same commands to build and run:
    FROM php:5.6-apache
    COPY config/php.ini /usr/local/etc/php
    COPY src/ /var/www/html/
Where `src/` is the directory containing all your PHP code and `config/` contains your `php.ini` file.
### How to install more PHP extensions
We provide two convenient scripts named `docker-php-ext-configure` and `docker-php-ext-install`; you can use them to easily install PHP extensions.
For example, if you want to have a PHP-FPM image with the `iconv`, `mcrypt` and `gd` extensions, you can inherit from the base image that you like and write your own `Dockerfile` like this:
    FROM php:5.5-fpm
    # Install modules
    RUN apt-get update && apt-get install -y \
            libmcrypt-dev libpng12-dev libfreetype6-dev libjpeg62-turbo-dev \
        && docker-php-ext-install iconv mcrypt \
        && docker-php-ext-configure gd --with-freetype-dir=/usr/include/ --with-jpeg-dir=/usr/include/ \
        && docker-php-ext-install gd
    CMD ["php-fpm"]
Remember, you must install dependencies for your extensions manually. If an extension needs custom `configure` arguments, you can use the `docker-php-ext-configure` script as in this example.
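For a quick experiment, you can also run these helper scripts interactively in a throwaway container; `mbstring` below is only an illustrative extension choice (it is bundled with PHP and needs no extra system packages):

```shell
# On the host: start a disposable PHP container with a shell
docker run -it --rm php:5.6-cli bash

# Inside the container: build and enable the extension, then verify it loads
docker-php-ext-install mbstring
php -m | grep mbstring
```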
### Without a `Dockerfile`
If you don't want to include a `Dockerfile` in your project, it is sufficient to do the following:
    docker run -it --rm --name my-apache-php-app -v "$PWD":/var/www/html php:5.6-apache
View [license information](http://php.net/license/) for the software contained in this image.
# What is PostgreSQL?
PostgreSQL, often simply "Postgres", is an object-relational database management system (ORDBMS) with an emphasis on extensibility and standards-compliance. As a database server, its primary function is to store data, securely and supporting best practices, and retrieve it later, as requested by other software applications, be it those on the same computer or those running on another computer across a network (including the Internet). It can handle workloads ranging from small single-machine applications to large Internet-facing applications with many concurrent users. Recent versions also provide replication of the database itself for security and scalability.
PostgreSQL implements the majority of the SQL:2011 standard, is ACID-compliant and transactional (including most DDL statements) avoiding locking issues using multiversion concurrency control (MVCC), provides immunity to dirty reads and full serializability; handles complex SQL queries using many indexing methods that are not available in other databases; has updateable views and materialized views, triggers, foreign keys; supports functions and stored procedures, and other expandability, and has a large number of extensions written by third parties. In addition to the possibility of working with the major proprietary and open source databases, PostgreSQL supports migration from them, by its extensive standard SQL support and available migration tools. And if proprietary extensions had been used, by its extensibility that can emulate many through some built-in and third-party open source compatibility extensions, such as for Oracle.
> [wikipedia.org/wiki/PostgreSQL](https://en.wikipedia.org/wiki/PostgreSQL)
## start a postgres instance
    docker run --name some-postgres -e POSTGRES_PASSWORD=mysecretpassword -d postgres
This image includes `EXPOSE 5432` (the postgres port), so standard container linking will make it automatically available to the linked containers. The default `postgres` user and database are created in the entrypoint with `initdb`.

> The postgres database is a default database meant for use by users, utilities and third party applications. [postgresql.org/docs](http://www.postgresql.org/docs/9.3/interactive/app-initdb.html)
## connect to it from an application
    docker run --name some-app --link some-postgres:postgres -d application-that-uses-postgres
## ... or via `psql`
    docker run -it --link some-postgres:postgres --rm postgres sh -c 'exec psql -h "$POSTGRES_PORT_5432_TCP_ADDR" -p "$POSTGRES_PORT_5432_TCP_PORT" -U postgres'
## Environment Variables
The PostgreSQL image uses several environment variables which are easy to miss. While none of the variables are required, they may significantly aid you in using the image.
### `POSTGRES_PASSWORD`
It is recommended that you set this environment variable when using the PostgreSQL image. It sets the superuser password for PostgreSQL. The default superuser is defined by the `POSTGRES_USER` environment variable. In the above example, it is being set to "mysecretpassword".
### `POSTGRES_USER`
This optional environment variable is used in conjunction with `POSTGRES_PASSWORD` to set a user and its password. This variable will create the specified user with superuser power and a database with the same name. If it is not specified, then the default user of `postgres` will be used.
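Combining the two variables, a sketch of starting a container with a non-default superuser follows; the user name and password here are placeholders:

```shell
# Creates superuser "myappuser" and a database named "myappuser"
docker run --name some-postgres \
	-e POSTGRES_USER=myappuser \
	-e POSTGRES_PASSWORD=mysecretpassword \
	-d postgres
```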
# How to extend this image
If you would like to do additional initialization in an image derived from this one, add a `*.sh` script under `/docker-entrypoint-initdb.d` (creating the directory if necessary). After the entrypoint calls `initdb` to create the default `postgres` user and database, it will source any `*.sh` script found in that directory to do further initialization before starting the service. If you need to execute SQL commands as part of your initialization, the use of Postgres' [single user mode](http://www.postgresql.org/docs/9.3/static/app-postgres.html#AEN90580) is highly recommended.
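As a sketch, such a script might look like the following. The user and database names are placeholders, and the `gosu postgres` invocation assumes the same user-switching setup this image's entrypoint uses:

```shell
#!/bin/bash
# /docker-entrypoint-initdb.d/init-myapp-db.sh -- illustrative sketch;
# "myapp" is a placeholder name. Runs after initdb has created the
# default user and database, before the service starts.
gosu postgres postgres --single -jE <<-EOSQL
	CREATE USER myapp;
	CREATE DATABASE myapp OWNER myapp;
EOSQL
```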
# Caveats
If there is no database when `postgres` starts in a container, then `postgres` will create the default database for you. While this is the expected behavior of `postgres`, this means that it will not accept incoming connections during that time. This may cause issues when using automation tools, such as `fig`, that start several containers simultaneously.
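One possible workaround is to poll the server from the host before starting dependent containers, for example with `pg_isready` (shipped with PostgreSQL 9.3 and later); this is a sketch, and the container name matches the earlier example:

```shell
# Block until the postgres container accepts connections
until docker exec some-postgres pg_isready -U postgres; do
	sleep 1
done
echo "postgres is ready"
```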
# What is PyPy?
PyPy is a Python interpreter and just-in-time compiler. PyPy focuses on speed, efficiency and compatibility with the original CPython interpreter.
PyPy started out as a Python interpreter written in the Python language itself. Current PyPy versions are translated from RPython to C code and compiled. The PyPy JIT (short for "Just In Time") compiler is capable of turning Python code into machine code at run time.
> [wikipedia.org/wiki/PyPy](https://en.wikipedia.org/wiki/PyPy)
## Create a `Dockerfile` in your Python app project
    FROM pypy:3-onbuild
    CMD [ "pypy3", "./your-daemon-or-script.py" ]
or (if you need to use PyPy 2):
    FROM pypy:2-onbuild
    CMD [ "pypy", "./your-daemon-or-script.py" ]
These images include multiple `ONBUILD` triggers, which should be all you need to bootstrap most applications. The build will `COPY` a `requirements.txt` file, `RUN pip install` on said file, and then copy the current directory into `/usr/src/app`.
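When you build `FROM` one of these `onbuild` images, the triggers act roughly as if lines like the following were part of your own `Dockerfile`; this is an approximation for illustration, not the image's exact contents:

```dockerfile
# Approximate effect of the ONBUILD triggers (sketch)
COPY requirements.txt /usr/src/app/
RUN pip install -r requirements.txt
COPY . /usr/src/app
```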
You can then build and run the Docker image:
    docker build -t my-python-app .
    docker run -it --rm --name my-running-app my-python-app
## Run a single Python script
For many simple, single file projects, you may find it inconvenient to write a complete `Dockerfile`. In such cases, you can run a Python script by using the Python Docker image directly:
    docker run -it --rm --name my-running-script -v "$PWD":/usr/src/myapp -w /usr/src/myapp pypy:3 pypy3 your-daemon-or-script.py
or (again, if you need to use Python 2):
    docker run -it --rm --name my-running-script -v "$PWD":/usr/src/myapp -w /usr/src/myapp pypy:2 pypy your-daemon-or-script.py
View [license information](https://bitbucket.org/pypy/pypy/src/c3ff0dd6252b6ba0d230f3624dbb4aab8973a1d0/LICENSE?at=default) for software contained in this image.
# What is Python?
Python is an interpreted, interactive, object-oriented, open-source programming language. It incorporates modules, exceptions, dynamic typing, very high level dynamic data types, and classes. Python combines remarkable power with very clear syntax. It has interfaces to many system calls and libraries, as well as to various window systems, and is extensible in C or C++. It is also usable as an extension language for applications that need a programmable interface. Finally, Python is portable: it runs on many Unix variants, on the Mac, and on Windows 2000 and later.
> [wikipedia.org/wiki/Python_(programming_language)](https://en.wikipedia.org/wiki/Python_%28programming_language%29)
%%LOGO%%
## Create a `Dockerfile` in your Python app project
    FROM python:3-onbuild
    CMD [ "python", "./your-daemon-or-script.py" ]
or (if you need to use Python 2):
    FROM python:2-onbuild
    CMD [ "python", "./your-daemon-or-script.py" ]
These images include multiple `ONBUILD` triggers, which should be all you need to bootstrap most applications. The build will `COPY` a `requirements.txt` file, `RUN pip install` on said file, and then copy the current directory into `/usr/src/app`.
You can then build and run the Docker image:
    docker build -t my-python-app .
    docker run -it --rm --name my-running-app my-python-app
## Run a single Python script
For many simple, single file projects, you may find it inconvenient to write a complete `Dockerfile`. In such cases, you can run a Python script by using the Python Docker image directly:
    docker run -it --rm --name my-running-script -v "$PWD":/usr/src/myapp -w /usr/src/myapp python:3 python your-daemon-or-script.py
or (again, if you need to use Python 2):
    docker run -it --rm --name my-running-script -v "$PWD":/usr/src/myapp -w /usr/src/myapp python:2 python your-daemon-or-script.py
# What is R?
R is a system for statistical computation and graphics. It consists of a language plus a run-time environment with graphics, a debugger, access to certain system functions, and the ability to run programs stored in script files.
The R language is widely used among statisticians and data miners for developing statistical software and data analysis. Polls and surveys of data miners are showing R's popularity has increased substantially in recent years.
R is an implementation of the S programming language combined with lexical scoping semantics inspired by Scheme. S was created by John Chambers while at Bell Labs. R was created by Ross Ihaka and Robert Gentleman at the University of Auckland, New Zealand, and is currently developed by the R Development Core Team, of which Chambers is a member. R is named partly after the first names of the first two R authors and partly as a play on the name of S.
R is a GNU project. The source code for the R software environment is written primarily in C, Fortran, and R. R is freely available under the GNU General Public License, and pre-compiled binary versions are provided for various operating systems. R uses a command line interface; however, several graphical user interfaces are available for use with R.
> [R FAQ](http://cran.r-project.org/doc/FAQ/R-FAQ.html#What-is-R_003f), [wikipedia.org/wiki/R_(programming_language)](http://en.wikipedia.org/wiki/R_%28programming_language%29)
%%LOGO%%
# How to use this image
## Interactive R
Launch R directly for interactive work:
    docker run -ti --rm r-base
## Batch mode
Link the working directory to run R batch commands. We recommend specifying a non-root user when linking a volume to the container to avoid permission changes, as illustrated here:
    docker run -ti --rm -v "$PWD":/home/docker -w /home/docker -u docker r-base R CMD check .
Alternatively, just run a bash session on the container first. This allows a user to run batch commands and also edit and run scripts:
    docker run -ti --rm r-base /usr/bin/bash
    vim.tiny myscript.R
Write the script in the container, exit `vim`, and run `Rscript`:
    Rscript myscript.R
## Dockerfiles
Use `r-base` as a base for your own Dockerfiles. For instance, something along the lines of the following will compile and run your project:
    FROM r-base:latest
    COPY . /usr/local/src/myscripts
    WORKDIR /usr/local/src/myscripts
    CMD ["Rscript", "myscript.R"]
Build your image with the following command (the path argument is the directory containing your `Dockerfile`, not the file itself):

    docker build -t myscript /path/to/your/project
Running this container with no command will execute the script. Alternatively, a user could run this container in interactive or batch mode as described above, instead of linking volumes.
Further documentation and example use cases can be found at the [rocker-org](https://github.com/rocker-org/rocker/wiki) project wiki.
View [R-project license information](http://www.r-project.org/Licenses/) for the software contained in this image.
## Issues
If you have any problems with or questions about this image, please contact us %%MAILING-LIST%% through a [GitHub issue](%%GITHUB-REPO%%/issues).
You can also reach us via email at `rocker-maintainers@eddelbuettel.com`.
## Contributing
You are invited to contribute new features, fixes, or updates, large or small; we are always thrilled to receive pull requests, and do our best to process them as fast as we can.
Before you start to code, we recommend discussing your plans %%MAILING-LIST%% through a [GitHub issue](%%GITHUB-REPO%%/issues), especially for more ambitious contributions. This gives other contributors a chance to point you in the right direction, give you feedback on your design, and help you find out if someone else is working on the same thing.
# What is RabbitMQ?
RabbitMQ is open source message broker software (sometimes called message-oriented middleware) that implements the Advanced Message Queuing Protocol (AMQP). The RabbitMQ server is written in the Erlang programming language and is built on the Open Telecom Platform framework for clustering and failover. Client libraries to interface with the broker are available for all major programming languages.
> [wikipedia.org/wiki/RabbitMQ](https://en.wikipedia.org/wiki/RabbitMQ)
## Running the daemon
One of the important things to note about RabbitMQ is that it stores data based on what it calls the "Node Name", which defaults to the hostname. What this means for usage in Docker is that we should either specify `-h`/`--hostname` or `-e RABBITMQ_NODENAME=...` explicitly for each daemon so that we don't get a random hostname and can keep track of our data:
    docker run -d -e RABBITMQ_NODENAME=my-rabbit --name some-rabbit rabbitmq:3
If you give that a minute, then do `docker logs some-rabbit`, you'll see in the output a block similar to:
    =INFO REPORT==== 31-Dec-2014::23:21:09 ===
    node           : my-rabbit@988c28b0eb2e
    home dir       : /var/lib/rabbitmq
    config file(s) : /etc/rabbitmq/rabbitmq.config (not found)
    cookie hash    : IFQiLgiJ4goGJrdsLJvN7A==
    log            : undefined
    sasl log       : undefined
    database dir   : /var/lib/rabbitmq/mnesia/my-rabbit
Note the `database dir` there, especially that it has my `RABBITMQ_NODENAME` appended to the end for the file storage. This image makes all of `/var/lib/rabbitmq` a volume by default.
### Management Plugin
There is a second set of tags provided with the [management plugin](https://www.rabbitmq.com/management.html) installed and enabled by default, which is available on the standard management port of 15672, with the default username and password of `guest` / `guest`:
    docker run -d -e RABBITMQ_NODENAME=my-rabbit --name some-rabbit rabbitmq:3-management
You can access it by visiting `http://container-ip:15672` in a browser or, if you need access outside the host, on port 8080:
    docker run -d -e RABBITMQ_NODENAME=my-rabbit --name some-rabbit -p 8080:15672 rabbitmq:3-management
You can then go to `http://localhost:8080` or `http://host-ip:8080` in a browser.
## Connecting to the daemon
docker run --name some-app --link some-rabbit:rabbit -d application-that-uses-rabbitmq
View [license information](https://www.rabbitmq.com/mpl.html) for the software contained in this image.
# What is Ruby on Rails?
Ruby on Rails or, simply, Rails is an open source web application framework which runs on the Ruby programming language. It is a full-stack framework. This means that "out of the box", Rails can create pages and applications that gather information from a web server, talk to or query a database, and render templates. As a result, Rails features a routing system that is independent of the web server.
> [wikipedia.org/wiki/Ruby_on_Rails](https://en.wikipedia.org/wiki/Ruby_on_Rails)
## Create a `Dockerfile` in your Rails app project
FROM rails:onbuild
Put this file in the root of your app, next to the `Gemfile`.
This image includes multiple `ONBUILD` triggers which should cover most applications. The build will `COPY . /usr/src/app`, `RUN bundle install`, `EXPOSE 3000`, and set the default command to `rails server`.
You can then build and run the Docker image:
docker build -t my-rails-app .
docker run --name some-rails-app -d my-rails-app
You can test it by visiting `http://container-ip:3000` in a browser or, if you need access outside the host, on port 8080:
docker run --name some-rails-app -p 8080:3000 -d my-rails-app
You can then go to `http://localhost:8080` or `http://host-ip:8080` in a browser.
### Generate a `Gemfile.lock`
The `onbuild` tag expects a `Gemfile.lock` in your app directory. This `docker run` will help you generate one. Run it in the root of your app, next to the `Gemfile`:
docker run --rm -v "$PWD":/usr/src/app -w /usr/src/app ruby:2.1 bundle install
## Bootstrap a new Rails application
If you want to generate the scaffolding for a new Rails project, you can do the following:
docker run -it --rm --user "$(id -u):$(id -g)" -v "$PWD":/usr/src/app -w /usr/src/app rails rails new webapp
This will create a sub-directory named `webapp` inside your current directory.
View [license information](https://github.com/rails/rails#license) for the software contained in this image.
# What is Redis?
Redis is an open-source, networked, in-memory, key-value data store with optional durability. It is written in ANSI C. The development of Redis has been sponsored by Pivotal since May 2013; before that, it was sponsored by VMware. According to the monthly ranking by DB-Engines.com, Redis is the most popular key-value store. The name Redis means REmote DIctionary Server.
> [wikipedia.org/wiki/Redis](https://en.wikipedia.org/wiki/Redis)
## start a redis instance
docker run --name some-redis -d redis
This image includes `EXPOSE 6379` (the redis port), so standard container linking will make it automatically available to the linked containers (as the following examples illustrate).
## start with persistent storage
docker run --name some-redis -d redis redis-server --appendonly yes
If persistence is enabled, data is stored in the `VOLUME /data`, which can be used with `--volumes-from some-volume-container` or `-v /docker/host/dir:/data` (see [docs.docker volumes](http://docs.docker.com/userguide/dockervolumes/)).
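For example, to keep the append-only file on the host using the bind-mount form mentioned above:

```
docker run --name some-redis -v /docker/host/dir:/data -d redis redis-server --appendonly yes
```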
For more about Redis Persistence, see [http://redis.io/topics/persistence](http://redis.io/topics/persistence).
## connect to it from an application
docker run --name some-app --link some-redis:redis -d application-that-uses-redis
## ... or via `redis-cli`
docker run -it --link some-redis:redis --rm redis sh -c 'exec redis-cli -h "$REDIS_PORT_6379_TCP_ADDR" -p "$REDIS_PORT_6379_TCP_PORT"'
## Additionally, if you want to use your own redis.conf ...
You can create your own Dockerfile that adds a redis.conf from the context into /usr/local/etc/redis/, like so.
```
FROM redis
COPY redis.conf /usr/local/etc/redis/redis.conf
CMD [ "redis-server", "/usr/local/etc/redis/redis.conf" ]
```
Alternatively, you can specify something along the same lines with `docker run` options.
docker run -v /myredis/conf/redis.conf:/usr/local/etc/redis/redis.conf --name myredis redis /usr/local/etc/redis/redis.conf
Where `/myredis/conf/` is a local directory containing your `redis.conf` file. Using this method means that there is no need for you to have a Dockerfile for your redis container.
View [license information](http://redis.io/topics/license) for the software contained in this image.
# Docker Registry
See comprehensive documentation on our [GitHub page](https://github.com/docker/docker-registry).
## Run the Registry
### Recommended: run the registry docker container
- install docker according to the [following instructions](http://docs.docker.io/installation/#installation)
- run the registry: `docker run -p 5000:5000 registry`
or
```
docker run \
-e SETTINGS_FLAVOR=s3 \
-e AWS_BUCKET=acme-docker \
-e STORAGE_PATH=/registry \
-e AWS_KEY=AKIAHSHB43HS3J92MXZ \
-e AWS_SECRET=xdDowwlK7TJajV1Y7EoOZrmuPEJlHYcNP2k4j49T \
-e SEARCH_BACKEND=sqlalchemy \
-p 5000:5000 \
registry
```
NOTE: The container will try to allocate the port 5000. If the port is already taken, find out which container is already using it by running `docker ps`.
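For example, you can first check what is bound to the port with `docker ps`, and then publish the registry on an alternate host port (5001 here is arbitrary; the container still listens on 5000 internally):

```
docker ps
docker run -p 5001:5000 registry
```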
# What is RethinkDB?
RethinkDB is an open-source, distributed database built to store JSON documents and effortlessly scale to multiple machines. It's easy to set up and learn and features a simple but powerful query language that supports table joins, groupings, aggregations, and functions.
%%LOGO%%
## Start an instance with data mounted in the working directory
The default CMD of the image is `rethinkdb --bind all`, so the RethinkDB daemon will bind to all network interfaces available to the container (by default, RethinkDB only accepts connections from `localhost`).
```bash
docker run --name some-rethink -v "$PWD:/data" -d rethinkdb
```
## Configuration
See the [official docs](http://www.rethinkdb.com/docs/) for information on using and configuring a RethinkDB cluster.
View [license information](http://www.gnu.org/licenses/agpl-3.0.html) for the software contained in this image.
# What is Ruby?
Ruby is a dynamic, reflective, object-oriented, general-purpose, open-source programming language. According to its authors, Ruby was influenced by Perl, Smalltalk, Eiffel, Ada, and Lisp. It supports multiple programming paradigms, including functional, object-oriented, and imperative. It also has a dynamic type system and automatic memory management.
> [wikipedia.org/wiki/Ruby_(programming_language)](https://en.wikipedia.org/wiki/Ruby_%28programming_language%29)
%%LOGO%%
## Create a `Dockerfile` in your Ruby app project
```
FROM ruby:2.1-onbuild
CMD ["./your-daemon-or-script.rb"]
```
Put this file in the root of your app, next to the `Gemfile`.
This image includes multiple `ONBUILD` triggers which should be all you need to bootstrap most applications. The build will `COPY . /usr/src/app` and `RUN bundle install`.
You can then build and run the Ruby image:
docker build -t my-ruby-app .
docker run -it --name my-running-script my-ruby-app
### Generate a `Gemfile.lock`
The `onbuild` tag expects a `Gemfile.lock` in your app directory. This `docker run` will help you generate one. Run it in the root of your app, next to the `Gemfile`:
docker run --rm -v "$PWD":/usr/src/app -w /usr/src/app ruby:2.1 bundle install
## Run a single Ruby script
For many simple, single file projects, you may find it inconvenient to write a complete `Dockerfile`. In such cases, you can run a Ruby script by using the Ruby Docker image directly:
docker run -it --rm --name my-running-script -v "$PWD":/usr/src/myapp -w /usr/src/myapp ruby:2.1 ruby your-daemon-or-script.rb
View [license information](https://www.ruby-lang.org/en/about/license.txt) for the software contained in this image.
# `FROM scratch`
This image is most useful in the context of building base images or super minimal images (such as images that contain only a single binary; see [`hello-world`](https://github.com/docker-library/hello-world) for an example).
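A minimal sketch of that pattern: assuming you have a statically linked binary named `hello` next to your `Dockerfile`, the entire image can be just:

```
FROM scratch
COPY hello /
CMD ["/hello"]
```

The binary must be statically linked, since `scratch` provides no libraries (or anything else) at all.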
# What is Sentry?
Sentry is a realtime event logging and aggregation platform. It specializes in monitoring errors and extracting all the information needed to do a proper post-mortem without any of the hassle of the standard user feedback loop.
> [github.com/getsentry/sentry](https://github.com/getsentry/sentry)
### PostgreSQL database (as recommended by upstream)
docker run --name some-sentry --link some-postgres:postgres -d sentry
### MySQL database
docker run --name some-sentry --link some-mysql:mysql -d sentry
### Redis buffering (recommended by upstream for any real workloads)
To enable Update Buffers using Redis, just add `--link some-redis:redis` to the `docker run` arguments of your service.
### port mapping
If you'd like to be able to access the instance from the host without the container's IP, standard port mappings can be used. Just add `-p 8080:9000` to the `docker run` arguments and then access either `http://localhost:8080` or `http://host-ip:8080` in a browser.
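For example, combining the PostgreSQL setup above with such a port mapping:

```
docker run --name some-sentry --link some-postgres:postgres -p 8080:9000 -d sentry
```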
## configuring the initial user
The following assumes you chose PostgreSQL. If you did not, just replace the `--link` entries appropriately:
docker run -it --rm --link some-postgres:postgres sentry sentry createsuperuser
Once the user is created, you must run the following to give them the proper teams/access within the database: (replace `<username>` here with whatever was entered as the "Username" when prompted by `createsuperuser` above)
docker run -it --rm --link some-postgres:postgres sentry sentry repair --owner=<username>
View [license information](https://github.com/getsentry/sentry/blob/master/LICENSE) for the software contained in this image.
%%LOGO%%
`swarm` is a simple tool which controls a cluster of Docker hosts and exposes it as a single "virtual" host.
`swarm` uses the standard Docker API as its frontend, which means any tool which speaks Docker can control swarm transparently: dokku, fig, krane, flynn, deis, docker-ui, shipyard, drone.io, Jenkins... and of course the Docker client itself.
Like the other Docker projects, `swarm` follows the "batteries included but removable" principle. It ships with a simple scheduling backend out of the box, and as initial development settles, an API will develop to enable pluggable backends. The goal is to provide a smooth out-of-box experience for simple use cases, and allow swapping in more powerful backends, like `Mesos`, for large scale production deployments.
# Example usage
```
$ docker run --rm swarm list token://<cluster_id>
<node_ip:2375>
```
See [here](https://github.com/docker/swarm/blob/master/discovery/README.md) for more information about other discovery services.
## Advanced Scheduling
See [filters](https://github.com/docker/swarm/blob/master/scheduler/filter/README.md) and [strategies](https://github.com/docker/swarm/blob/master/scheduler/strategy/README.md) to learn more about advanced scheduling.
## TLS
Swarm supports TLS authentication between the CLI and Swarm but also between Swarm and the Docker nodes.
In order to enable TLS, the same command line options as Docker can be specified:
`swarm manage --tlsverify --tlscacert=<CACERT> --tlscert=<CERT> --tlskey=<KEY> [...]`
Please refer to the [Docker documentation](https://docs.docker.com/articles/https/) for more information on how to set up TLS authentication on Docker and generating the certificates.
Note that Swarm certificates must be generated with `extendedKeyUsage = clientAuth,serverAuth`.
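As an illustration only (the file names here are hypothetical, and your CA setup may differ), an OpenSSL signing step that adds both EKUs to a node certificate might look like:

```
# extension file requesting both client and server authentication EKUs
echo 'extendedKeyUsage = clientAuth,serverAuth' > extfile.cnf
# sign the node's CSR with your CA, applying the extension file
openssl x509 -req -days 365 -in node.csr -CA ca.pem -CAkey ca-key.pem \
	-CAcreateserial -out node-cert.pem -extfile extfile.cnf
```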
View [license information](https://github.com/docker/swarm/blob/master/LICENSE) for the software contained in this image.
# What Is Thrift
> The Apache Thrift software framework, for scalable cross-language services development, combines a software stack with a code generation engine to build services that work efficiently and seamlessly between C++, Java, Python, PHP, Ruby, Erlang, Perl, Haskell, C#, Cocoa, JavaScript, Node.js, Smalltalk, OCaml and Delphi and other languages.
Read more about [Thrift](https://thrift.apache.org).
# How To Use This Image
This image is intended to run as an executable. Files are provided by mounting a directory. Here's an example of compiling `service.thrift` to Ruby, with the output in the current directory.
docker run -v "$PWD:/data" thrift thrift -o /data --gen rb /data/service.thrift
docker run -v "$PWD:/data" thrift thrift -o /data --gen rb /data/service.thrift
Note that you may want to include `-u $(id -u)` to set the UID on generated files. The thrift process runs as root by default, which will generate root-owned files depending on your Docker setup.
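Putting that into practice, the same command with the UID set so the generated files are owned by your user:

```
docker run -v "$PWD:/data" -u $(id -u) thrift thrift -o /data --gen rb /data/service.thrift
```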
# What is Tomcat?
Apache Tomcat (or simply Tomcat) is an open source web server and servlet container developed by the Apache Software Foundation (ASF). Tomcat implements the Java Servlet and the JavaServer Pages (JSP) specifications from Oracle, and provides a "pure Java" HTTP web server environment for Java code to run in. In the simplest config Tomcat runs in a single operating system process. The process runs a Java virtual machine (JVM). Every single HTTP request from a browser to Tomcat is processed in the Tomcat process in a separate thread.
> [wikipedia.org/wiki/Apache_Tomcat](https://en.wikipedia.org/wiki/Apache_Tomcat)
%%LOGO%% Logo &copy; Apache Software Foundation
# How to use this image.
Run the default Tomcat server (`CMD ["catalina.sh", "run"]`):
docker run -it --rm tomcat:8.0
You can test it by visiting `http://container-ip:8080` in a browser or, if you need access outside the host, on port 8888:
docker run -it --rm -p 8888:8080 tomcat:8.0
You can then go to `http://localhost:8888` or `http://host-ip:8888` in a browser.
The default Tomcat environment in the image for versions 7 and 8 is:
```
CATALINA_BASE:   /usr/local/tomcat
CATALINA_HOME:   /usr/local/tomcat
CATALINA_TMPDIR: /usr/local/tomcat/temp
JRE_HOME:        /usr
CLASSPATH:       /usr/local/tomcat/bin/bootstrap.jar:/usr/local/tomcat/bin/tomcat-juli.jar
```
The default Tomcat environment in the image for version 6 is:
```
CATALINA_BASE:   /usr/local/tomcat
CATALINA_HOME:   /usr/local/tomcat
CATALINA_TMPDIR: /usr/local/tomcat/temp
JRE_HOME:        /usr
CLASSPATH:       /usr/local/tomcat/bin/bootstrap.jar
```
The configuration files are available in `/usr/local/tomcat/conf/`. By default, no user is included in the "manager-gui" role required to operate the "/manager/html" web application. If you wish to use this app, you must define such a user in `tomcat-users.xml`.
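One way to add such a user is a small derived image that copies in your own `tomcat-users.xml` (the file's contents are up to you; this is just a sketch):

```
FROM tomcat:8.0
COPY tomcat-users.xml /usr/local/tomcat/conf/tomcat-users.xml
```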
View [license information](https://www.apache.org/licenses/LICENSE-2.0) for the software contained in this image.
# `ubuntu-debootstrap`
This image is the result of running `debootstrap --variant=minbase` against the currently supported suites of the Ubuntu distribution. It is not official or supported by Canonical in any way. For an official Ubuntu image that is supported by Canonical, see [`ubuntu`](https://registry.hub.docker.com/_/ubuntu/).