Merge branch 'master' into bump_v0.8.1

Docker-DCO-1.1-Signed-off-by: Michael Crosby <michael@crosbymichael.com> (github: crosbymichael)
Michael Crosby 2014-02-18 10:51:19 -08:00
commit 049c7effe9
198 changed files with 6327 additions and 2810 deletions


@@ -7,8 +7,10 @@ feels wrong or incomplete.
 ## Reporting Issues
 
 When reporting [issues](https://github.com/dotcloud/docker/issues)
-on GitHub please include your host OS ( Ubuntu 12.04, Fedora 19, etc... )
-and the output of `docker version` along with the output of `docker info` if possible.
+on GitHub please include your host OS (Ubuntu 12.04, Fedora 19, etc),
+the output of `uname -a` and the output of `docker version` along with
+the output of `docker info`. Please include the steps required to reproduce
+the problem if possible and applicable.
 This information will help us review and fix your issue faster.
 
 ## Build Environment
@@ -86,6 +88,8 @@ curl -o .git/hooks/pre-commit https://raw.github.com/edsrzf/gofmt-git-hook/maste
 Pull requests descriptions should be as clear as possible and include a
 reference to all the issues that they address.
 
+Pull requests mustn't contain commits from other users or branches.
+
 Code review comments may be added to your pull request. Discuss, then make the
 suggested modifications and push additional commits to your feature branch. Be
 sure to post a comment after pushing. The new commits will show up in the pull
@@ -105,6 +109,18 @@ name and email address match your git configuration. The AUTHORS file is
 regenerated occasionally from the git commit history, so a mismatch may result
 in your changes being overwritten.
 
+### Merge approval
+
+Docker maintainers use LGTM (looks good to me) in comments on the code review
+to indicate acceptance.
+
+A change requires LGTMs from an absolute majority of the maintainers of each
+component affected. For example, if a change affects docs/ and registry/, it
+needs an absolute majority from the maintainers of docs/ AND, separately, an
+absolute majority of the maintainers of registry/.
+
+For more details see [MAINTAINERS.md](hack/MAINTAINERS.md)
+
 ### Sign your work
 
 The sign-off is a simple line at the end of the explanation for the
@@ -113,7 +129,7 @@ pass it on as an open-source patch. The rules are pretty simple: if you
 can certify the below:
 
 ```
-Docker Developer Grant and Certificate of Origin 1.1
+Docker Developer Certificate of Origin 1.1
 
 By making a contribution to the Docker Project ("Project"), I represent and
 warrant that:
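(The sign-off format this certificate governs is visible at the top of this very page, in the merge's own commit message: `Docker-DCO-1.1-Signed-off-by: Michael Crosby <michael@crosbymichael.com> (github: crosbymichael)`.)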
@@ -163,7 +179,7 @@ If you have any questions, please refer to the FAQ in the [docs](http://docs.doc
 * Step 1: learn the component inside out
 * Step 2: make yourself useful by contributing code, bugfixes, support etc.
 * Step 3: volunteer on the irc channel (#docker@freenode)
-* Step 4: propose yourself at a scheduled #docker-meeting
+* Step 4: propose yourself at a scheduled docker meeting in #docker-dev
 
 Don't forget: being a maintainer is a time investment. Make sure you will have time to make yourself available.
 You don't have to be a maintainer to make a difference on the project!

FIXME

@@ -11,20 +11,14 @@ They are just like FIXME comments in the source code, except we're not sure wher
 to put them - so we put them here :)
 
-* Merge Runtime, Server and Builder into Runtime
 * Run linter on codebase
 * Unify build commands and regular commands
 * Move source code into src/ subdir for clarity
 * docker build: on non-existent local path for ADD, don't show full absolute path on the host
-* docker tag foo REPO:TAG
 * use size header for progress bar in pull
 * Clean up context upload in build!!!
 * Parallel pull
-* Always generate a resolv.conf per container, to avoid changing resolv.conf under thne container's feet
-* Save metadata with import/export (#1974)
 * Upgrade dockerd without stopping containers
 * Simple command to remove all untagged images (`docker rmi $(docker images | awk '/^<none>/ { print $3 }')`)
 * Simple command to clean up containers for disk space
-* Caching after an ADD (#880)
 * Clean up the ProgressReader api, it's a PITA to use
-* Use netlink instead of iproute2/iptables (#925)


@@ -3,7 +3,7 @@
 GIT_BRANCH := $(shell git rev-parse --abbrev-ref HEAD)
 DOCKER_IMAGE := docker:$(GIT_BRANCH)
 DOCKER_DOCS_IMAGE := docker-docs:$(GIT_BRANCH)
-DOCKER_RUN_DOCKER := docker run -rm -i -t -privileged -e TESTFLAGS -v $(CURDIR)/bundles:/go/src/github.com/dotcloud/docker/bundles "$(DOCKER_IMAGE)"
+DOCKER_RUN_DOCKER := docker run -rm -i -t -privileged -e TESTFLAGS -v "$(CURDIR)/bundles:/go/src/github.com/dotcloud/docker/bundles" "$(DOCKER_IMAGE)"
 
 default: binary


@@ -1 +1 @@
-0.8.0
+0.8.0-dev


@@ -10,6 +10,7 @@ import (
     "fmt"
     "github.com/dotcloud/docker/auth"
     "github.com/dotcloud/docker/engine"
+    "github.com/dotcloud/docker/pkg/listenbuffer"
     "github.com/dotcloud/docker/pkg/systemd"
     "github.com/dotcloud/docker/utils"
     "github.com/gorilla/mux"
@@ -25,15 +26,28 @@ import (
     "strconv"
     "strings"
     "syscall"
+    "time"
 )
 
-// FIXME: move code common to client and server to common.go
-
 const (
     APIVERSION        = 1.9
     DEFAULTHTTPHOST   = "127.0.0.1"
-    DEFAULTHTTPPORT   = 4243
     DEFAULTUNIXSOCKET = "/var/run/docker.sock"
 )
 
+var (
+    activationLock chan struct{}
+)
+
+func ValidateHost(val string) (string, error) {
+    host, err := utils.ParseHost(DEFAULTHTTPHOST, DEFAULTUNIXSOCKET, val)
+    if err != nil {
+        return val, err
+    }
+    return host, nil
+}
+
 type HttpApiFunc func(eng *engine.Engine, version float64, w http.ResponseWriter, r *http.Request, vars map[string]string) error
 
 func init() {
@@ -99,6 +113,15 @@ func writeJSON(w http.ResponseWriter, code int, v engine.Env) error {
     return v.Encode(w)
 }
 
+func streamJSON(job *engine.Job, w http.ResponseWriter, flush bool) {
+    w.Header().Set("Content-Type", "application/json")
+    if flush {
+        job.Stdout.Add(utils.NewWriteFlusher(w))
+    } else {
+        job.Stdout.Add(w)
+    }
+}
+
 func getBoolParam(value string) (bool, error) {
     if value == "" {
         return false, nil
@@ -205,7 +228,7 @@ func getImagesJSON(eng *engine.Engine, version float64, w http.ResponseWriter, r
     job.Setenv("all", r.Form.Get("all"))
 
     if version >= 1.7 {
-        job.Stdout.Add(w)
+        streamJSON(job, w, false)
     } else if outs, err = job.Stdout.AddListTable(); err != nil {
         return err
     }
@@ -222,13 +245,14 @@ func getImagesJSON(eng *engine.Engine, version float64, w http.ResponseWriter, r
             outLegacy := &engine.Env{}
             outLegacy.Set("Repository", parts[0])
             outLegacy.Set("Tag", parts[1])
-            outLegacy.Set("ID", out.Get("ID"))
+            outLegacy.Set("Id", out.Get("Id"))
             outLegacy.SetInt64("Created", out.GetInt64("Created"))
             outLegacy.SetInt64("Size", out.GetInt64("Size"))
             outLegacy.SetInt64("VirtualSize", out.GetInt64("VirtualSize"))
             outsLegacy.Add(outLegacy)
         }
     }
+    w.Header().Set("Content-Type", "application/json")
     if _, err := outsLegacy.WriteListTo(w); err != nil {
         return err
     }
@@ -256,9 +280,8 @@ func getEvents(eng *engine.Engine, version float64, w http.ResponseWriter, r *ht
         return err
     }
 
-    w.Header().Set("Content-Type", "application/json")
     var job = eng.Job("events", r.RemoteAddr)
-    job.Stdout.Add(utils.NewWriteFlusher(w))
+    streamJSON(job, w, true)
     job.Setenv("since", r.Form.Get("since"))
     return job.Run()
 }
@@ -269,7 +292,7 @@ func getImagesHistory(eng *engine.Engine, version float64, w http.ResponseWriter
     }
 
     var job = eng.Job("history", vars["name"])
-    job.Stdout.Add(w)
+    streamJSON(job, w, false)
 
     if err := job.Run(); err != nil {
         return err
@@ -282,7 +305,7 @@ func getContainersChanges(eng *engine.Engine, version float64, w http.ResponseWr
         return fmt.Errorf("Missing parameter")
     }
     var job = eng.Job("changes", vars["name"])
-    job.Stdout.Add(w)
+    streamJSON(job, w, false)
 
     return job.Run()
 }
@@ -299,7 +322,7 @@ func getContainersTop(eng *engine.Engine, version float64, w http.ResponseWriter
     }
 
     job := eng.Job("top", vars["name"], r.Form.Get("ps_args"))
-    job.Stdout.Add(w)
+    streamJSON(job, w, false)
     return job.Run()
 }
@@ -320,7 +343,7 @@ func getContainersJSON(eng *engine.Engine, version float64, w http.ResponseWrite
     job.Setenv("limit", r.Form.Get("limit"))
 
     if version >= 1.5 {
-        job.Stdout.Add(w)
+        streamJSON(job, w, false)
     } else if outs, err = job.Stdout.AddTable(); err != nil {
         return err
     }
@@ -333,6 +356,7 @@ func getContainersJSON(eng *engine.Engine, version float64, w http.ResponseWrite
             ports.ReadListFrom([]byte(out.Get("Ports")))
             out.Set("Ports", displayablePorts(ports))
         }
+        w.Header().Set("Content-Type", "application/json")
         if _, err = outs.WriteListTo(w); err != nil {
             return err
         }
@@ -366,7 +390,7 @@ func postCommit(eng *engine.Engine, version float64, w http.ResponseWriter, r *h
         env engine.Env
         job = eng.Job("commit", r.Form.Get("container"))
     )
-    if err := config.Import(r.Body); err != nil {
+    if err := config.Decode(r.Body); err != nil {
         utils.Errorf("%s", err)
     }
@@ -425,8 +449,12 @@ func postImagesCreate(eng *engine.Engine, version float64, w http.ResponseWriter
         job.Stdin.Add(r.Body)
     }
 
-    job.SetenvBool("json", version > 1.0)
-    job.Stdout.Add(utils.NewWriteFlusher(w))
+    if version > 1.0 {
+        job.SetenvBool("json", true)
+        streamJSON(job, w, true)
+    } else {
+        job.Stdout.Add(utils.NewWriteFlusher(w))
+    }
     if err := job.Run(); err != nil {
         if !job.Stdout.Used() {
             return err
@@ -465,7 +493,7 @@ func getImagesSearch(eng *engine.Engine, version float64, w http.ResponseWriter,
     var job = eng.Job("search", r.Form.Get("term"))
     job.SetenvJson("metaHeaders", metaHeaders)
     job.SetenvJson("authConfig", authConfig)
-    job.Stdout.Add(w)
+    streamJSON(job, w, false)
 
     return job.Run()
 }
@@ -482,8 +510,12 @@ func postImagesInsert(eng *engine.Engine, version float64, w http.ResponseWriter
     }
 
     job := eng.Job("insert", vars["name"], r.Form.Get("url"), r.Form.Get("path"))
-    job.SetenvBool("json", version > 1.0)
-    job.Stdout.Add(w)
+    if version > 1.0 {
+        job.SetenvBool("json", true)
+        streamJSON(job, w, false)
+    } else {
+        job.Stdout.Add(w)
+    }
     if err := job.Run(); err != nil {
         if !job.Stdout.Used() {
             return err
@@ -532,8 +564,12 @@ func postImagesPush(eng *engine.Engine, version float64, w http.ResponseWriter,
     job := eng.Job("push", vars["name"])
     job.SetenvJson("metaHeaders", metaHeaders)
     job.SetenvJson("authConfig", authConfig)
-    job.SetenvBool("json", version > 1.0)
-    job.Stdout.Add(utils.NewWriteFlusher(w))
+    if version > 1.0 {
+        job.SetenvBool("json", true)
+        streamJSON(job, w, true)
+    } else {
+        job.Stdout.Add(utils.NewWriteFlusher(w))
+    }
 
     if err := job.Run(); err != nil {
         if !job.Stdout.Used() {
@@ -635,7 +671,7 @@ func deleteImages(eng *engine.Engine, version float64, w http.ResponseWriter, r
         return fmt.Errorf("Missing parameter")
     }
     var job = eng.Job("image_delete", vars["name"])
-    job.Stdout.Add(w)
+    streamJSON(job, w, false)
     job.SetenvBool("autoPrune", version > 1.1)
 
     return job.Run()
@@ -815,7 +851,7 @@ func getContainersByName(eng *engine.Engine, version float64, w http.ResponseWri
         return fmt.Errorf("Missing parameter")
     }
     var job = eng.Job("inspect", vars["name"], "container")
-    job.Stdout.Add(w)
+    streamJSON(job, w, false)
     job.SetenvBool("conflict", true) //conflict=true to detect conflict between containers and images in the job
     return job.Run()
 }
@@ -825,7 +861,7 @@ func getImagesByName(eng *engine.Engine, version float64, w http.ResponseWriter,
         return fmt.Errorf("Missing parameter")
     }
     var job = eng.Job("inspect", vars["name"], "image")
-    job.Stdout.Add(w)
+    streamJSON(job, w, false)
     job.SetenvBool("conflict", true) //conflict=true to detect conflict between containers and images in the job
     return job.Run()
 }
@@ -865,11 +901,11 @@ func postBuild(eng *engine.Engine, version float64, w http.ResponseWriter, r *ht
     }
 
     if version >= 1.8 {
-        w.Header().Set("Content-Type", "application/json")
         job.SetenvBool("json", true)
-    }
+        streamJSON(job, w, true)
+    } else {
         job.Stdout.Add(utils.NewWriteFlusher(w))
+    }
     job.Stdin.Add(r.Body)
     job.Setenv("remote", r.FormValue("remote"))
     job.Setenv("t", r.FormValue("t"))
@@ -910,9 +946,12 @@ func postContainersCopy(eng *engine.Engine, version float64, w http.ResponseWrit
     }
 
     job := eng.Job("container_copy", vars["name"], copyData.Get("Resource"))
-    job.Stdout.Add(w)
+    streamJSON(job, w, false)
     if err := job.Run(); err != nil {
         utils.Errorf("%s", err.Error())
+        if strings.Contains(err.Error(), "No such container") {
+            w.WriteHeader(http.StatusNotFound)
+        }
     }
     return nil
 }
@@ -1126,7 +1165,7 @@ func ListenAndServe(proto, addr string, eng *engine.Engine, logging, enableCors
         }
     }
 
-    l, err := net.Listen(proto, addr)
+    l, err := listenbuffer.NewListenBuffer(proto, addr, activationLock, 15*time.Minute)
     if err != nil {
         return err
     }
@@ -1168,8 +1207,15 @@ func ListenAndServe(proto, addr string, eng *engine.Engine, logging, enableCors
 // ServeApi loops through all of the protocols sent in to docker and spawns
 // off a go routine to setup a serving http.Server for each.
 func ServeApi(job *engine.Job) engine.Status {
-    protoAddrs := job.Args
-    chErrors := make(chan error, len(protoAddrs))
+    var (
+        protoAddrs = job.Args
+        chErrors   = make(chan error, len(protoAddrs))
+    )
+    activationLock = make(chan struct{})
+
+    if err := job.Eng.Register("acceptconnections", AcceptConnections); err != nil {
+        return job.Error(err)
+    }
 
     for _, protoAddr := range protoAddrs {
         protoAddrParts := strings.SplitN(protoAddr, "://", 2)
@@ -1186,8 +1232,15 @@ func ServeApi(job *engine.Job) engine.Status {
         }
     }
 
+    return engine.StatusOK
+}
+
+func AcceptConnections(job *engine.Job) engine.Status {
     // Tell the init daemon we are accepting requests
     go systemd.SdNotify("READY=1")
 
+    // close the lock so the listeners start accepting connections
+    close(activationLock)
+
     return engine.StatusOK
 }
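The two hunks above are the heart of this file's change: `ListenAndServe` now binds its sockets through `listenbuffer`, and nothing is accepted until the new `acceptconnections` job closes `activationLock`, so early clients queue in the kernel backlog while the daemon finishes starting up. A minimal sketch of that delayed-accept pattern using only the standard library (`waitListener` and `ready` are illustrative names, not from the Docker tree):

```go
package main

import (
	"fmt"
	"net"
)

// waitListener binds an address immediately but holds Accept until ready is
// closed, mirroring what listenbuffer.NewListenBuffer does for the API.
// (The real implementation also gives up after a timeout, 15 minutes here.)
type waitListener struct {
	net.Listener
	ready <-chan struct{}
}

func (l *waitListener) Accept() (net.Conn, error) {
	<-l.ready // block until activation
	return l.Listener.Accept()
}

func main() {
	inner, err := net.Listen("tcp", "127.0.0.1:0")
	if err != nil {
		panic(err)
	}
	ready := make(chan struct{}) // plays the role of activationLock
	l := &waitListener{Listener: inner, ready: ready}

	go func() {
		// This dial succeeds right away; the connection just waits in the
		// backlog until main closes ready.
		if c, err := net.Dial("tcp", l.Addr().String()); err == nil {
			c.Close()
		}
	}()

	close(ready) // what AcceptConnections does via close(activationLock)
	if c, err := l.Accept(); err == nil {
		fmt.Println("accepted", c.RemoteAddr())
		c.Close()
	}
}
```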


@@ -1,21 +1,21 @@
-package docker
+package api
 
 import (
-    "archive/tar"
     "bufio"
     "bytes"
     "encoding/base64"
     "encoding/json"
     "errors"
     "fmt"
-    "github.com/dotcloud/docker/api"
     "github.com/dotcloud/docker/archive"
     "github.com/dotcloud/docker/auth"
+    "github.com/dotcloud/docker/dockerversion"
     "github.com/dotcloud/docker/engine"
+    "github.com/dotcloud/docker/nat"
     flag "github.com/dotcloud/docker/pkg/mflag"
-    "github.com/dotcloud/docker/pkg/sysinfo"
     "github.com/dotcloud/docker/pkg/term"
     "github.com/dotcloud/docker/registry"
+    "github.com/dotcloud/docker/runconfig"
     "github.com/dotcloud/docker/utils"
     "io"
    "io/ioutil"
@@ -29,7 +29,6 @@ import (
     "reflect"
     "regexp"
     "runtime"
-    "sort"
     "strconv"
     "strings"
     "syscall"
@@ -38,11 +37,6 @@ import (
     "time"
 )
 
-var (
-    GITCOMMIT string
-    VERSION   string
-)
-
 var (
     ErrConnectionRefused = errors.New("Can't connect to docker daemon. Is 'docker -d' running on this host?")
 )
@@ -80,7 +74,7 @@ func (cli *DockerCli) CmdHelp(args ...string) error {
             return nil
         }
     }
-    help := fmt.Sprintf("Usage: docker [OPTIONS] COMMAND [arg...]\n -H=[unix://%s]: tcp://host:port to bind/connect to or unix://path/to/socket to use\n\nA self-sufficient runtime for linux containers.\n\nCommands:\n", api.DEFAULTUNIXSOCKET)
+    help := fmt.Sprintf("Usage: docker [OPTIONS] COMMAND [arg...]\n -H=[unix://%s]: tcp://host:port to bind/connect to or unix://path/to/socket to use\n\nA self-sufficient runtime for linux containers.\n\nCommands:\n", DEFAULTUNIXSOCKET)
     for _, command := range [][]string{
         {"attach", "Attach to a running container"},
         {"build", "Build a container from a Dockerfile"},
@@ -139,35 +133,10 @@ func (cli *DockerCli) CmdInsert(args ...string) error {
     return cli.stream("POST", "/images/"+cmd.Arg(0)+"/insert?"+v.Encode(), nil, cli.out, nil)
 }
 
-// mkBuildContext returns an archive of an empty context with the contents
-// of `dockerfile` at the path ./Dockerfile
-func MkBuildContext(dockerfile string, files [][2]string) (archive.Archive, error) {
-    buf := new(bytes.Buffer)
-    tw := tar.NewWriter(buf)
-    files = append(files, [2]string{"Dockerfile", dockerfile})
-    for _, file := range files {
-        name, content := file[0], file[1]
-        hdr := &tar.Header{
-            Name: name,
-            Size: int64(len(content)),
-        }
-        if err := tw.WriteHeader(hdr); err != nil {
-            return nil, err
-        }
-        if _, err := tw.Write([]byte(content)); err != nil {
-            return nil, err
-        }
-    }
-    if err := tw.Close(); err != nil {
-        return nil, err
-    }
-    return buf, nil
-}
-
 func (cli *DockerCli) CmdBuild(args ...string) error {
     cmd := cli.Subcmd("build", "[OPTIONS] PATH | URL | -", "Build a new container image from the source code at PATH")
     tag := cmd.String([]string{"t", "-tag"}, "", "Repository name (and optionally a tag) to be applied to the resulting image in case of success")
-    suppressOutput := cmd.Bool([]string{"q", "-quiet"}, false, "Suppress verbose build output")
+    suppressOutput := cmd.Bool([]string{"q", "-quiet"}, false, "Suppress the verbose output generated by the containers")
     noCache := cmd.Bool([]string{"#no-cache", "-no-cache"}, false, "Do not use cache when building the image")
     rm := cmd.Bool([]string{"#rm", "-rm"}, false, "Remove intermediate containers after a successful build")
     if err := cmd.Parse(args); err != nil {
@@ -191,7 +160,7 @@ func (cli *DockerCli) CmdBuild(args ...string) error {
         if err != nil {
             return err
         }
-        context, err = MkBuildContext(string(dockerfile), nil)
+        context, err = archive.Generate("Dockerfile", string(dockerfile))
     } else if utils.IsURL(cmd.Arg(0)) || utils.IsGIT(cmd.Arg(0)) {
         isRemote = true
     } else {
@@ -209,7 +178,7 @@ func (cli *DockerCli) CmdBuild(args ...string) error {
     // FIXME: ProgressReader shouldn't be this annoying to use
     if context != nil {
         sf := utils.NewStreamFormatter(false)
-        body = utils.ProgressReader(ioutil.NopCloser(context), 0, cli.err, sf, true, "", "Uploading context")
+        body = utils.ProgressReader(context, 0, cli.err, sf, true, "", "Uploading context")
     }
     // Upload the build context
     v := &url.Values{}
@@ -266,11 +235,7 @@ func (cli *DockerCli) CmdLogin(args ...string) error {
     }
     serverAddress := auth.IndexServerAddress()
     if len(cmd.Args()) > 0 {
-        serverAddress, err = registry.ExpandAndVerifyRegistryUrl(cmd.Arg(0))
-        if err != nil {
-            return err
-        }
-        fmt.Fprintf(cli.out, "Login against server at %s\n", serverAddress)
+        serverAddress = cmd.Arg(0)
     }
 
     promptDefault := func(prompt string, configDefault string) {
@@ -392,12 +357,12 @@ func (cli *DockerCli) CmdVersion(args ...string) error {
         cmd.Usage()
         return nil
     }
-    if VERSION != "" {
-        fmt.Fprintf(cli.out, "Client version: %s\n", VERSION)
+    if dockerversion.VERSION != "" {
+        fmt.Fprintf(cli.out, "Client version: %s\n", dockerversion.VERSION)
     }
     fmt.Fprintf(cli.out, "Go version (client): %s\n", runtime.Version())
-    if GITCOMMIT != "" {
-        fmt.Fprintf(cli.out, "Git commit (client): %s\n", GITCOMMIT)
+    if dockerversion.GITCOMMIT != "" {
+        fmt.Fprintf(cli.out, "Git commit (client): %s\n", dockerversion.GITCOMMIT)
     }
 
     body, _, err := readBody(cli.call("GET", "/version", nil, false))
@@ -422,7 +387,7 @@ func (cli *DockerCli) CmdVersion(args ...string) error {
     release := utils.GetReleaseVersion()
     if release != "" {
         fmt.Fprintf(cli.out, "Last stable version: %s", release)
-        if (VERSION != "" || remoteVersion.Exists("Version")) && (strings.Trim(VERSION, "-dev") != release || strings.Trim(remoteVersion.Get("Version"), "-dev") != release) {
+        if (dockerversion.VERSION != "" || remoteVersion.Exists("Version")) && (strings.Trim(dockerversion.VERSION, "-dev") != release || strings.Trim(remoteVersion.Get("Version"), "-dev") != release) {
             fmt.Fprintf(cli.out, ", please update docker")
         }
         fmt.Fprintf(cli.out, "\n")
@@ -803,7 +768,7 @@ func (cli *DockerCli) CmdPort(args ...string) error {
         return err
     }
 
-    if frontends, exists := out.NetworkSettings.Ports[Port(port+"/"+proto)]; exists && frontends != nil {
+    if frontends, exists := out.NetworkSettings.Ports[nat.Port(port+"/"+proto)]; exists && frontends != nil {
         for _, frontend := range frontends {
             fmt.Fprintf(cli.out, "%s:%s\n", frontend.HostIp, frontend.HostPort)
         }
@@ -1313,19 +1278,6 @@ func (cli *DockerCli) printTreeNode(noTrunc bool, image *engine.Env, prefix stri
     }
 }
 
-func displayablePorts(ports *engine.Table) string {
-    result := []string{}
-    for _, port := range ports.Data {
-        if port.Get("IP") == "" {
-            result = append(result, fmt.Sprintf("%d/%s", port.GetInt("PublicPort"), port.Get("Type")))
-        } else {
-            result = append(result, fmt.Sprintf("%s:%d->%d/%s", port.Get("IP"), port.GetInt("PublicPort"), port.GetInt("PrivatePort"), port.Get("Type")))
-        }
-    }
-    sort.Strings(result)
-    return strings.Join(result, ", ")
-}
-
 func (cli *DockerCli) CmdPs(args ...string) error {
     cmd := cli.Subcmd("ps", "[OPTIONS]", "List containers")
     quiet := cmd.Bool([]string{"q", "-quiet"}, false, "Only display numeric IDs")
@@ -1455,11 +1407,11 @@ func (cli *DockerCli) CmdCommit(args ...string) error {
     v.Set("comment", *flComment)
     v.Set("author", *flAuthor)
     var (
-        config *Config
+        config *runconfig.Config
         env    engine.Env
     )
     if *flConfig != "" {
-        config = &Config{}
+        config = &runconfig.Config{}
         if err := json.Unmarshal([]byte(*flConfig), config); err != nil {
             return err
         }
@@ -1620,7 +1572,7 @@ func (cli *DockerCli) CmdAttach(args ...string) error {
         return err
     }
 
-    if !container.State.IsRunning() {
+    if !container.State.Running {
         return fmt.Errorf("Impossible to attach to a stopped container, start it first")
     }
@@ -1749,210 +1701,9 @@ func (cli *DockerCli) CmdTag(args ...string) error {
     return nil
 }
 
-//FIXME Only used in tests
-func ParseRun(args []string, sysInfo *sysinfo.SysInfo) (*Config, *HostConfig, *flag.FlagSet, error) {
-    cmd := flag.NewFlagSet("run", flag.ContinueOnError)
-    cmd.SetOutput(ioutil.Discard)
-    cmd.Usage = nil
-    return parseRun(cmd, args, sysInfo)
-}
-
-func parseRun(cmd *flag.FlagSet, args []string, sysInfo *sysinfo.SysInfo) (*Config, *HostConfig, *flag.FlagSet, error) {
-    var (
-        // FIXME: use utils.ListOpts for attach and volumes?
-        flAttach  = NewListOpts(ValidateAttach)
-        flVolumes = NewListOpts(ValidatePath)
-        flLinks   = NewListOpts(ValidateLink)
-        flEnv     = NewListOpts(ValidateEnv)
-
-        flPublish     ListOpts
-        flExpose      ListOpts
-        flDns         ListOpts
-        flVolumesFrom ListOpts
-        flLxcOpts     ListOpts
-
-        flAutoRemove      = cmd.Bool([]string{"#rm", "-rm"}, false, "Automatically remove the container when it exits (incompatible with -d)")
-        flDetach          = cmd.Bool([]string{"d", "-detach"}, false, "Detached mode: Run container in the background, print new container id")
-        flNetwork         = cmd.Bool([]string{"n", "-networking"}, true, "Enable networking for this container")
-        flPrivileged      = cmd.Bool([]string{"#privileged", "-privileged"}, false, "Give extended privileges to this container")
-        flPublishAll      = cmd.Bool([]string{"P", "-publish-all"}, false, "Publish all exposed ports to the host interfaces")
-        flStdin           = cmd.Bool([]string{"i", "-interactive"}, false, "Keep stdin open even if not attached")
-        flTty             = cmd.Bool([]string{"t", "-tty"}, false, "Allocate a pseudo-tty")
-        flContainerIDFile = cmd.String([]string{"#cidfile", "-cidfile"}, "", "Write the container ID to the file")
-        flEntrypoint      = cmd.String([]string{"#entrypoint", "-entrypoint"}, "", "Overwrite the default entrypoint of the image")
-        flHostname        = cmd.String([]string{"h", "-hostname"}, "", "Container host name")
-        flMemoryString    = cmd.String([]string{"m", "-memory"}, "", "Memory limit (format: <number><optional unit>, where unit = b, k, m or g)")
-        flUser            = cmd.String([]string{"u", "-user"}, "", "Username or UID")
-        flWorkingDir      = cmd.String([]string{"w", "-workdir"}, "", "Working directory inside the container")
-        flCpuShares       = cmd.Int64([]string{"c", "-cpu-shares"}, 0, "CPU shares (relative weight)")
-
-        // For documentation purpose
-        _ = cmd.Bool([]string{"#sig-proxy", "-sig-proxy"}, true, "Proxify all received signal to the process (even in non-tty mode)")
-        _ = cmd.String([]string{"#name", "-name"}, "", "Assign a name to the container")
-    )
-
-    cmd.Var(&flAttach, []string{"a", "-attach"}, "Attach to stdin, stdout or stderr.")
-    cmd.Var(&flVolumes, []string{"v", "-volume"}, "Bind mount a volume (e.g. from the host: -v /host:/container, from docker: -v /container)")
-    cmd.Var(&flLinks, []string{"#link", "-link"}, "Add link to another container (name:alias)")
-    cmd.Var(&flEnv, []string{"e", "-env"}, "Set environment variables")
-
-    cmd.Var(&flPublish, []string{"p", "-publish"}, fmt.Sprintf("Publish a container's port to the host (format: %s) (use 'docker port' to see the actual mapping)", PortSpecTemplateFormat))
-    cmd.Var(&flExpose, []string{"#expose", "-expose"}, "Expose a port from the container without publishing it to your host")
-    cmd.Var(&flDns, []string{"#dns", "-dns"}, "Set custom dns servers")
-    cmd.Var(&flVolumesFrom, []string{"#volumes-from", "-volumes-from"}, "Mount volumes from the specified container(s)")
-    cmd.Var(&flLxcOpts, []string{"#lxc-conf", "-lxc-conf"}, "Add custom lxc options -lxc-conf=\"lxc.cgroup.cpuset.cpus = 0,1\"")
-
-    if err := cmd.Parse(args); err != nil {
-        return nil, nil, cmd, err
-    }
-
-    // Check if the kernel supports memory limit cgroup.
-    if sysInfo != nil && *flMemoryString != "" && !sysInfo.MemoryLimit {
-        *flMemoryString = ""
-    }
-
-    // Validate input params
-    if *flDetach && flAttach.Len() > 0 {
-        return nil, nil, cmd, ErrConflictAttachDetach
-    }
-    if *flWorkingDir != "" && !path.IsAbs(*flWorkingDir) {
-        return nil, nil, cmd, ErrInvalidWorikingDirectory
-    }
-    if *flDetach && *flAutoRemove {
-        return nil, nil, cmd, ErrConflictDetachAutoRemove
-    }
-
-    // If neither -d or -a are set, attach to everything by default
-    if flAttach.Len() == 0 && !*flDetach {
-        if !*flDetach {
-            flAttach.Set("stdout")
-            flAttach.Set("stderr")
-            if *flStdin {
-                flAttach.Set("stdin")
-            }
-        }
-    }
-
-    var flMemory int64
-    if *flMemoryString != "" {
-        parsedMemory, err := utils.RAMInBytes(*flMemoryString)
-        if err != nil {
-            return nil, nil, cmd, err
-        }
-        flMemory = parsedMemory
-    }
-
-    var binds []string
-    // add any bind targets to the list of container volumes
-    for bind := range flVolumes.GetMap() {
-        if arr := strings.Split(bind, ":"); len(arr) > 1 {
-            if arr[0] == "/" {
-                return nil, nil, cmd, fmt.Errorf("Invalid bind mount: source can't be '/'")
-            }
-            dstDir := arr[1]
-            flVolumes.Set(dstDir)
-            binds = append(binds, bind)
-            flVolumes.Delete(bind)
-        } else if bind == "/" {
-            return nil, nil, cmd, fmt.Errorf("Invalid volume: path can't be '/'")
-        }
-    }
-
-    var (
-        parsedArgs = cmd.Args()
-        runCmd     []string
-        entrypoint []string
-        image      string
-    )
-    if len(parsedArgs) >= 1 {
-        image = cmd.Arg(0)
-    }
-    if len(parsedArgs) > 1 {
-        runCmd = parsedArgs[1:]
-    }
-    if *flEntrypoint != "" {
-        entrypoint = []string{*flEntrypoint}
-    }
-
-    lxcConf, err := parseLxcConfOpts(flLxcOpts)
-    if err != nil {
-        return nil, nil, cmd, err
-    }
-
-    var (
-        domainname string
-        hostname   = *flHostname
-        parts      = strings.SplitN(hostname, ".", 2)
-    )
-    if len(parts) > 1 {
-        hostname = parts[0]
-        domainname = parts[1]
-    }
-
-    ports, portBindings, err := parsePortSpecs(flPublish.GetAll())
-    if err != nil {
-        return nil, nil, cmd, err
-    }
-
-    // Merge in exposed ports to the map of published ports
-    for _, e := range flExpose.GetAll() {
-        if strings.Contains(e, ":") {
-            return nil, nil, cmd, fmt.Errorf("Invalid port format for --expose: %s", e)
-        }
-        p := NewPort(splitProtoPort(e))
-        if _, exists := ports[p]; !exists {
-            ports[p] = struct{}{}
-        }
-    }
-
-    config := &Config{
-        Hostname:        hostname,
-        Domainname:      domainname,
-        PortSpecs:       nil, // Deprecated
-        ExposedPorts:    ports,
-        User:            *flUser,
-        Tty:             *flTty,
-        NetworkDisabled: !*flNetwork,
-        OpenStdin:       *flStdin,
-        Memory:          flMemory,
-        CpuShares:       *flCpuShares,
-        AttachStdin:     flAttach.Get("stdin"),
-        AttachStdout:    flAttach.Get("stdout"),
-        AttachStderr:    flAttach.Get("stderr"),
-        Env:             flEnv.GetAll(),
-        Cmd:             runCmd,
-        Dns:             flDns.GetAll(),
-        Image:           image,
-        Volumes:         flVolumes.GetMap(),
-        VolumesFrom:     strings.Join(flVolumesFrom.GetAll(), ","),
-        Entrypoint:      entrypoint,
-        WorkingDir:      *flWorkingDir,
-    }
-
-    hostConfig := &HostConfig{
-        Binds:           binds,
-        ContainerIDFile: *flContainerIDFile,
-        LxcConf:         lxcConf,
-        Privileged:      *flPrivileged,
-        PortBindings:    portBindings,
-        Links:           flLinks.GetAll(),
-        PublishAllPorts: *flPublishAll,
-    }
-
-    if sysInfo != nil && flMemory > 0 && !sysInfo.SwapLimit {
-        //fmt.Fprintf(stdout, "WARNING: Your kernel does not support swap limit capabilities. Limitation discarded.\n")
-        config.MemorySwap = -1
-    }
-
-    // When allocating stdin in attached mode, close stdin at client disconnect
-    if config.OpenStdin && config.AttachStdin {
-        config.StdinOnce = true
-    }
-    return config, hostConfig, cmd, nil
-}
-
 func (cli *DockerCli) CmdRun(args ...string) error {
-    config, hostConfig, cmd, err := parseRun(cli.Subcmd("run", "[OPTIONS] IMAGE [COMMAND] [ARG...]", "Run a command in a new container"), args, nil)
+    // FIXME: just use runconfig.Parse already
+    config, hostConfig, cmd, err := runconfig.ParseSubcommand(cli.Subcmd("run", "[OPTIONS] IMAGE [COMMAND] [ARG...]", "Run a command in a new container"), args, nil)
     if err != nil {
         return err
     }
@@ -1995,12 +1746,7 @@ func (cli *DockerCli) CmdRun(args ...string) error {
     stream, statusCode, err := cli.call("POST", "/containers/create?"+containerValues.Encode(), config, false)
     //if image not found try to pull it
     if statusCode == 404 {
-        _, tag := utils.ParseRepositoryTag(config.Image)
-        if tag == "" {
-            tag = DEFAULTTAG
-        }
-        fmt.Fprintf(cli.err, "Unable to find image '%s' (tag: %s) locally\n", config.Image, tag)
+        fmt.Fprintf(cli.err, "Unable to find image '%s' locally\n", config.Image)
 
         v := url.Values{}
         repos, tag := utils.ParseRepositoryTag(config.Image)
@@ -2215,6 +1961,9 @@ func (cli *DockerCli) CmdCp(args ...string) error {
     if stream != nil {
         defer stream.Close()
     }
+    if statusCode == 404 {
+        return fmt.Errorf("No such container: %v", info[0])
+    }
     if err != nil {
         return err
     }
@@ -2283,7 +2032,7 @@ func (cli *DockerCli) call(method, path string, data interface{}, passAuthInfo b
     re := regexp.MustCompile("/+")
     path = re.ReplaceAllString(path, "/")
 
-    req, err := http.NewRequest(method, fmt.Sprintf("/v%g%s", api.APIVERSION, path), params)
+    req, err := http.NewRequest(method, fmt.Sprintf("/v%g%s", APIVERSION, path), params)
     if err != nil {
         return nil, -1, err
     }
@@ -2307,7 +2056,7 @@ func (cli *DockerCli) call(method, path string, data interface{}, passAuthInfo b
             }
         }
     }
-    req.Header.Set("User-Agent", "Docker-Client/"+VERSION)
+    req.Header.Set("User-Agent", "Docker-Client/"+dockerversion.VERSION)
     req.Host = cli.addr
     if data != nil {
         req.Header.Set("Content-Type", "application/json")
@@ -2337,7 +2086,7 @@ func (cli *DockerCli) call(method, path string, data interface{}, passAuthInfo b
         return nil, -1, err
     }
     if len(body) == 0 {
-        return nil, resp.StatusCode, fmt.Errorf("Error :%s", http.StatusText(resp.StatusCode))
+        return nil, resp.StatusCode, fmt.Errorf("Error: request returned %s for api route and version %s, check if the server supports the requested api version", http.StatusText(resp.StatusCode), req.URL)
     }
     return nil, resp.StatusCode, fmt.Errorf("Error: %s", bytes.TrimSpace(body))
 }
@@ -2360,11 +2109,11 @@ func (cli *DockerCli) stream(method, path string, in io.Reader, out io.Writer, h
     re := regexp.MustCompile("/+")
     path = re.ReplaceAllString(path, "/")
 
-    req, err := http.NewRequest(method, fmt.Sprintf("/v%g%s", api.APIVERSION, path), in)
+    req, err := http.NewRequest(method, fmt.Sprintf("/v%g%s", APIVERSION, path), in)
     if err != nil {
         return err
     }
-    req.Header.Set("User-Agent", "Docker-Client/"+VERSION)
+    req.Header.Set("User-Agent", "Docker-Client/"+dockerversion.VERSION)
     req.Host = cli.addr
     if method == "POST" {
         req.Header.Set("Content-Type", "plain/text")
@@ -2405,7 +2154,7 @@ func (cli *DockerCli) stream(method, path string, in io.Reader, out io.Writer, h
         return fmt.Errorf("Error: %s", bytes.TrimSpace(body))
     }
 
-    if api.MatchesContentType(resp.Header.Get("Content-Type"), "application/json") {
+    if MatchesContentType(resp.Header.Get("Content-Type"), "application/json") {
         return utils.DisplayJSONMessagesStream(resp.Body, out, cli.terminalFd, cli.isTerminal)
     }
     if _, err := io.Copy(out, resp.Body); err != nil {
@@ -2424,11 +2173,11 @@ func (cli *DockerCli) hijack(method, path string, setRawTerminal bool, in io.Rea
     re := regexp.MustCompile("/+")
     path = re.ReplaceAllString(path, "/")
 
-    req, err := http.NewRequest(method, fmt.Sprintf("/v%g%s", api.APIVERSION, path), nil)
+    req, err := http.NewRequest(method, fmt.Sprintf("/v%g%s", APIVERSION, path), nil)
     if err != nil {
         return err
     }
-    req.Header.Set("User-Agent", "Docker-Client/"+VERSION)
+    req.Header.Set("User-Agent", "Docker-Client/"+dockerversion.VERSION)
     req.Header.Set("Content-Type", "plain/text")
     req.Host = cli.addr
@@ -2607,7 +2356,7 @@ func getExitCode(cli *DockerCli, containerId string) (bool, int, error) {
     if err := json.Unmarshal(body, c); err != nil {
         return false, -1, err
     }
-    return c.State.IsRunning(), c.State.GetExitCode(), nil
+    return c.State.Running, c.State.ExitCode, nil
 }
 
 func readBody(stream io.ReadCloser, statusCode int, err error) ([]byte, int, error) {

api/container.go (new file)

@@ -0,0 +1,18 @@
+package api
+
+import (
+    "github.com/dotcloud/docker/nat"
+    "github.com/dotcloud/docker/runconfig"
+)
+
+type Container struct {
+    Config     runconfig.Config
+    HostConfig runconfig.HostConfig
+    State      struct {
+        Running  bool
+        ExitCode int
+    }
+    NetworkSettings struct {
+        Ports nat.PortMap
+    }
+}
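The new `api.Container` type is a client-side view of the daemon's inspect output: just the fields the CLI actually reads (`CmdAttach` checks `State.Running`, `getExitCode` reads `State.ExitCode`, `CmdPort` walks `NetworkSettings.Ports`). A self-contained sketch of that decoding path, with an illustrative payload standing in for a live daemon response:

```go
package main

import (
	"encoding/json"
	"fmt"
)

// Trimmed-down copy of api.Container, enough to decode what getExitCode uses.
type Container struct {
	State struct {
		Running  bool
		ExitCode int
	}
}

func main() {
	// Payload shaped like GET /containers/<id>/json output
	// (illustrative, not captured from a real daemon).
	body := []byte(`{"State": {"Running": false, "ExitCode": 2}}`)

	c := &Container{}
	if err := json.Unmarshal(body, c); err != nil {
		panic(err)
	}
	// Mirrors getExitCode: report run state and exit code.
	fmt.Println(c.State.Running, c.State.ExitCode) // false 2
}
```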


@@ -1,10 +1,11 @@
 package archive
 
 import (
-    "archive/tar"
     "bytes"
+    "code.google.com/p/go/src/pkg/archive/tar"
     "compress/bzip2"
     "compress/gzip"
+    "errors"
     "fmt"
     "github.com/dotcloud/docker/utils"
     "io"
@@ -17,14 +18,19 @@ import (
     "syscall"
 )
 
-type Archive io.Reader
-
-type Compression int
-
-type TarOptions struct {
-    Includes    []string
-    Compression Compression
-}
+type (
+    Archive       io.ReadCloser
+    ArchiveReader io.Reader
+    Compression   int
+    TarOptions    struct {
+        Includes    []string
+        Compression Compression
+    }
+)
+
+var (
+    ErrNotImplemented = errors.New("Function not implemented")
+)
 
 const (
     Uncompressed Compression = iota
@@ -60,13 +66,13 @@ func DetectCompression(source []byte) Compression {
     return Uncompressed
 }
 
-func xzDecompress(archive io.Reader) (io.Reader, error) {
+func xzDecompress(archive io.Reader) (io.ReadCloser, error) {
     args := []string{"xz", "-d", "-c", "-q"}
 
     return CmdStream(exec.Command(args[0], args[1:]...), archive)
 }
 
-func DecompressStream(archive io.Reader) (io.Reader, error) {
+func DecompressStream(archive io.Reader) (io.ReadCloser, error) {
     buf := make([]byte, 10)
     totalN := 0
     for totalN < 10 {
@@ -85,11 +91,11 @@ func DecompressStream(archive io.Reader) (io.Reader, error) {
 
     switch compression {
     case Uncompressed:
-        return wrap, nil
+        return ioutil.NopCloser(wrap), nil
     case Gzip:
         return gzip.NewReader(wrap)
     case Bzip2:
-        return bzip2.NewReader(wrap), nil
+        return ioutil.NopCloser(bzip2.NewReader(wrap)), nil
     case Xz:
         return xzDecompress(wrap)
    default:
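`DecompressStream` now returns an `io.ReadCloser` because the gzip reader and the xz subprocess hold resources the caller must release; the uncompressed and bzip2 branches are wrapped in `ioutil.NopCloser` only to satisfy the same interface. A self-contained sketch of the contract a caller such as `Untar` relies on:

```go
package main

import (
	"bytes"
	"compress/gzip"
	"fmt"
	"io/ioutil"
)

func main() {
	// Build a gzipped payload in memory.
	var buf bytes.Buffer
	zw := gzip.NewWriter(&buf)
	zw.Write([]byte("hello layer"))
	zw.Close()

	// gzip.NewReader returns an io.ReadCloser, matching the new signature.
	r, err := gzip.NewReader(&buf)
	if err != nil {
		panic(err)
	}
	defer r.Close() // what Untar now does via `defer decompressedArchive.Close()`

	data, _ := ioutil.ReadAll(r)
	fmt.Println(string(data)) // hello layer
}
```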
@@ -101,7 +107,7 @@ func CompressStream(dest io.WriteCloser, compression Compression) (io.WriteClose
     switch compression {
     case Uncompressed:
-        return dest, nil
+        return utils.NopWriteCloser(dest), nil
     case Gzip:
         return gzip.NewWriter(dest), nil
     case Bzip2, Xz:
@@ -180,20 +186,25 @@ func addTarFile(path, name string, tw *tar.Writer) error {
     return nil
 }
 
-func createTarFile(path, extractDir string, hdr *tar.Header, reader *tar.Reader) error {
+func createTarFile(path, extractDir string, hdr *tar.Header, reader io.Reader) error {
+    // hdr.Mode is in linux format, which we can use for syscalls,
+    // but for os.Foo() calls we need the mode converted to os.FileMode,
+    // so use hdrInfo.Mode() (they differ for e.g. setuid bits)
+    hdrInfo := hdr.FileInfo()
+
     switch hdr.Typeflag {
     case tar.TypeDir:
         // Create directory unless it exists as a directory already.
         // In that case we just want to merge the two
         if fi, err := os.Lstat(path); !(err == nil && fi.IsDir()) {
-            if err := os.Mkdir(path, os.FileMode(hdr.Mode)); err != nil {
+            if err := os.Mkdir(path, hdrInfo.Mode()); err != nil {
                 return err
             }
         }
 
     case tar.TypeReg, tar.TypeRegA:
         // Source is regular file
-        file, err := os.OpenFile(path, os.O_CREATE|os.O_WRONLY, os.FileMode(hdr.Mode))
+        file, err := os.OpenFile(path, os.O_CREATE|os.O_WRONLY, hdrInfo.Mode())
         if err != nil {
             return err
         }
@@ -236,14 +247,14 @@ func createTarFile(path, extractDir string, hdr *tar.Header, reader *tar.Reader)
         return fmt.Errorf("Unhandled tar header type %d\n", hdr.Typeflag)
     }
 
-    if err := syscall.Lchown(path, hdr.Uid, hdr.Gid); err != nil {
+    if err := os.Lchown(path, hdr.Uid, hdr.Gid); err != nil {
         return err
     }
 
     // There is no LChmod, so ignore mode for symlink. Also, this
     // must happen after chown, as that can modify the file mode
     if hdr.Typeflag != tar.TypeSymlink {
-        if err := syscall.Chmod(path, uint32(hdr.Mode&07777)); err != nil {
+        if err := os.Chmod(path, hdrInfo.Mode()); err != nil {
             return err
         }
     }
@@ -251,7 +262,7 @@ func createTarFile(path, extractDir string, hdr *tar.Header, reader *tar.Reader)
     ts := []syscall.Timespec{timeToTimespec(hdr.AccessTime), timeToTimespec(hdr.ModTime)}
     // syscall.UtimesNano doesn't support a NOFOLLOW flag atm, and
     if hdr.Typeflag != tar.TypeSymlink {
-        if err := syscall.UtimesNano(path, ts); err != nil {
+        if err := UtimesNano(path, ts); err != nil {
             return err
         }
     } else {
@@ -264,7 +275,7 @@ func createTarFile(path, extractDir string, hdr *tar.Header, reader *tar.Reader)
 
 // Tar creates an archive from the directory at `path`, and returns it as a
 // stream of bytes.
-func Tar(path string, compression Compression) (io.Reader, error) {
+func Tar(path string, compression Compression) (io.ReadCloser, error) {
     return TarFilter(path, &TarOptions{Compression: compression})
 }
@@ -286,7 +297,7 @@ func escapeName(name string) string {
 // Tar creates an archive from the directory at `path`, only including files whose relative
 // paths are included in `filter`. If `filter` is nil, then all files are included.
-func TarFilter(srcPath string, options *TarOptions) (io.Reader, error) {
+func TarFilter(srcPath string, options *TarOptions) (io.ReadCloser, error) {
     pipeReader, pipeWriter := io.Pipe()
 
     compressWriter, err := CompressStream(pipeWriter, options.Compression)
@@ -332,6 +343,9 @@ func TarFilter(srcPath string, options *TarOptions) (io.Reader, error) {
         if err := compressWriter.Close(); err != nil {
             utils.Debugf("Can't close compress writer: %s\n", err)
         }
+        if err := pipeWriter.Close(); err != nil {
+            utils.Debugf("Can't close pipe writer: %s\n", err)
+        }
     }()
 
     return pipeReader, nil
@@ -347,12 +361,13 @@ func Untar(archive io.Reader, dest string, options *TarOptions) error {
         return fmt.Errorf("Empty archive")
     }
 
-    archive, err := DecompressStream(archive)
+    decompressedArchive, err := DecompressStream(archive)
     if err != nil {
         return err
     }
+    defer decompressedArchive.Close()
 
-    tr := tar.NewReader(archive)
+    tr := tar.NewReader(decompressedArchive)
     var dirs []*tar.Header
@@ -427,15 +442,19 @@ func TarUntar(src string, dst string) error {
     if err != nil {
         return err
     }
+    defer archive.Close()
     return Untar(archive, dst, nil)
 }
 
 // UntarPath is a convenience function which looks for an archive
 // at filesystem path `src`, and unpacks it at `dst`.
 func UntarPath(src, dst string) error {
-    if archive, err := os.Open(src); err != nil {
+    archive, err := os.Open(src)
+    if err != nil {
         return err
-    } else if err := Untar(archive, dst, nil); err != nil {
+    }
+    defer archive.Close()
+    if err := Untar(archive, dst, nil); err != nil {
         return err
     }
     return nil
@@ -523,7 +542,7 @@ func CopyFileWithTar(src, dst string) (err error) {
 // CmdStream executes a command, and returns its stdout as a stream.
 // If the command fails to run or doesn't complete successfully, an error
 // will be returned, including anything written on stderr.
-func CmdStream(cmd *exec.Cmd, input io.Reader) (io.Reader, error) {
+func CmdStream(cmd *exec.Cmd, input io.Reader) (io.ReadCloser, error) {
     if input != nil {
         stdin, err := cmd.StdinPipe()
         if err != nil {


@@ -1,8 +1,8 @@
 package archive
 
 import (
-    "archive/tar"
     "bytes"
+    "code.google.com/p/go/src/pkg/archive/tar"
     "fmt"
     "io"
     "io/ioutil"
@@ -67,12 +67,13 @@ func tarUntar(t *testing.T, origin string, compression Compression) error {
     if err != nil {
         t.Fatal(err)
     }
+    defer archive.Close()
 
     buf := make([]byte, 10)
     if _, err := archive.Read(buf); err != nil {
         return err
     }
-    archive = io.MultiReader(bytes.NewReader(buf), archive)
+    wrap := io.MultiReader(bytes.NewReader(buf), archive)
 
     detectedCompression := DetectCompression(buf)
     if detectedCompression.Extension() != compression.Extension() {
@@ -84,7 +85,7 @@ func tarUntar(t *testing.T, origin string, compression Compression) error {
         return err
     }
     defer os.RemoveAll(tmp)
-    if err := Untar(archive, tmp, nil); err != nil {
+    if err := Untar(wrap, tmp, nil); err != nil {
         return err
     }
     if _, err := os.Stat(tmp); err != nil {
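The `wrap` variable exists because `archive` is now a ReadCloser with a deferred `Close`: rather than reassigning it, the ten bytes sniffed for `DetectCompression` are stitched back in front of the remaining stream with `io.MultiReader`. The general sniff-then-restore pattern, self-contained:

```go
package main

import (
	"bytes"
	"fmt"
	"io"
	"io/ioutil"
	"strings"
)

func main() {
	src := strings.NewReader("some stream whose first bytes we must sniff")

	// Read a small prefix (as tarUntar does before DetectCompression)...
	buf := make([]byte, 10)
	n, _ := io.ReadFull(src, buf)

	// ...then stitch it back so the consumer sees the full stream again,
	// without touching the original (possibly Close-owning) reader variable.
	wrap := io.MultiReader(bytes.NewReader(buf[:n]), src)

	all, _ := ioutil.ReadAll(wrap)
	fmt.Printf("%q\n", all) // the original bytes, nothing lost
}
```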


@@ -1,7 +1,7 @@
 package archive
 
 import (
-    "archive/tar"
+    "code.google.com/p/go/src/pkg/archive/tar"
     "fmt"
     "github.com/dotcloud/docker/utils"
     "io"


@@ -1,8 +1,10 @@
 package archive
 
 import (
-    "archive/tar"
+    "code.google.com/p/go/src/pkg/archive/tar"
+    "fmt"
     "io"
+    "io/ioutil"
     "os"
     "path/filepath"
     "strings"
@@ -28,7 +30,7 @@ func timeToTimespec(time time.Time) (ts syscall.Timespec) {
 
 // ApplyLayer parses a diff in the standard layer format from `layer`, and
 // applies it to the directory `dest`.
-func ApplyLayer(dest string, layer Archive) error {
+func ApplyLayer(dest string, layer ArchiveReader) error {
     // We need to be able to set any perms
     oldmask := syscall.Umask(0)
     defer syscall.Umask(oldmask)
@@ -42,6 +44,9 @@ func ApplyLayer(dest string, layer Archive) error {
     var dirs []*tar.Header
 
+    aufsTempdir := ""
+    aufsHardlinks := make(map[string]*tar.Header)
+
     // Iterate through the files in the archive.
     for {
         hdr, err := tr.Next()
@@ -72,6 +77,22 @@ func ApplyLayer(dest string, layer Archive) error {
 
         // Skip AUFS metadata dirs
         if strings.HasPrefix(hdr.Name, ".wh..wh.") {
+            // Regular files inside /.wh..wh.plnk can be used as hardlink targets
+            // We don't want this directory, but we need the files in them so that
+            // such hardlinks can be resolved.
+            if strings.HasPrefix(hdr.Name, ".wh..wh.plnk") && hdr.Typeflag == tar.TypeReg {
+                basename := filepath.Base(hdr.Name)
+                aufsHardlinks[basename] = hdr
+                if aufsTempdir == "" {
+                    if aufsTempdir, err = ioutil.TempDir("", "dockerplnk"); err != nil {
+                        return err
+                    }
+                    defer os.RemoveAll(aufsTempdir)
+                }
+                if err := createTarFile(filepath.Join(aufsTempdir, basename), dest, hdr, tr); err != nil {
+                    return err
+                }
+            }
             continue
         }
@@ -96,7 +117,26 @@ func ApplyLayer(dest string, layer Archive) error {
             }
         }
 
-        if err := createTarFile(path, dest, hdr, tr); err != nil {
+        srcData := io.Reader(tr)
+        srcHdr := hdr
+
+        // Hard links into /.wh..wh.plnk don't work, as we don't extract that directory, so
+        // we manually retarget these into the temporary files we extracted them into
+        if hdr.Typeflag == tar.TypeLink && strings.HasPrefix(filepath.Clean(hdr.Linkname), ".wh..wh.plnk") {
+            linkBasename := filepath.Base(hdr.Linkname)
+            srcHdr = aufsHardlinks[linkBasename]
+            if srcHdr == nil {
+                return fmt.Errorf("Invalid aufs hardlink")
+            }
+            tmpFile, err := os.Open(filepath.Join(aufsTempdir, linkBasename))
+            if err != nil {
+                return err
+            }
+            defer tmpFile.Close()
+            srcData = tmpFile
+        }
+
+        if err := createTarFile(path, dest, srcHdr, srcData); err != nil {
             return err
         }

@@ -30,3 +30,10 @@ func LUtimesNano(path string, ts []syscall.Timespec) error {
 
     return nil
 }
+
+func UtimesNano(path string, ts []syscall.Timespec) error {
+    if err := syscall.UtimesNano(path, ts); err != nil {
+        return err
+    }
+    return nil
+}


@@ -1,4 +1,4 @@
-// +build !linux !amd64
+// +build !linux
 
 package archive
 
@@ -13,5 +13,9 @@ func getLastModification(stat *syscall.Stat_t) syscall.Timespec {
 }
 
 func LUtimesNano(path string, ts []syscall.Timespec) error {
-    return nil
+    return ErrNotImplemented
+}
+
+func UtimesNano(path string, ts []syscall.Timespec) error {
+    return ErrNotImplemented
 }

archive/wrap.go (new file)

@ -0,0 +1,59 @@
package archive
import (
"bytes"
"code.google.com/p/go/src/pkg/archive/tar"
"io/ioutil"
)
// Generate generates a new archive from the content provided
// as input.
//
// `files` is a sequence of path/content pairs. A new file is
// added to the archive for each pair.
// If the last pair is incomplete, the file is created with an
// empty content. For example:
//
// Generate("foo.txt", "hello world", "emptyfile")
//
// The above call will return an archive with 2 files:
// * ./foo.txt with content "hello world"
// * ./empty with empty content
//
// FIXME: stream content instead of buffering
// FIXME: specify permissions and other archive metadata
func Generate(input ...string) (Archive, error) {
files := parseStringPairs(input...)
buf := new(bytes.Buffer)
tw := tar.NewWriter(buf)
for _, file := range files {
name, content := file[0], file[1]
hdr := &tar.Header{
Name: name,
Size: int64(len(content)),
}
if err := tw.WriteHeader(hdr); err != nil {
return nil, err
}
if _, err := tw.Write([]byte(content)); err != nil {
return nil, err
}
}
if err := tw.Close(); err != nil {
return nil, err
}
return ioutil.NopCloser(buf), nil
}
func parseStringPairs(input ...string) (output [][2]string) {
output = make([][2]string, 0, len(input)/2+1)
for i := 0; i < len(input); i += 2 {
var pair [2]string
pair[0] = input[i]
if i+1 < len(input) {
pair[1] = input[i+1]
}
output = append(output, pair)
}
return
}
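A hypothetical caller of `Generate`, matching the doc comment above (this assumes `Archive` is readable as an `io.Reader`, which the `ioutil.NopCloser` return value implies):

```
package main

import (
	"io"
	"log"
	"os"

	"github.com/dotcloud/docker/archive"
)

func main() {
	// Two path/content pairs plus one trailing path with no content.
	a, err := archive.Generate("foo.txt", "hello world", "emptyfile")
	if err != nil {
		log.Fatal(err)
	}
	// The result is a plain tar stream; pipe it wherever a layer is expected.
	if _, err := io.Copy(os.Stdout, a); err != nil {
		log.Fatal(err)
	}
}
```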

View File

@ -151,12 +151,15 @@ func SaveConfig(configFile *ConfigFile) error {
// try to register/login to the registry server // try to register/login to the registry server
func Login(authConfig *AuthConfig, factory *utils.HTTPRequestFactory) (string, error) { func Login(authConfig *AuthConfig, factory *utils.HTTPRequestFactory) (string, error) {
client := &http.Client{} var (
reqStatusCode := 0 status string
var status string reqBody []byte
var reqBody []byte err error
client = &http.Client{}
reqStatusCode = 0
serverAddress = authConfig.ServerAddress
)
serverAddress := authConfig.ServerAddress
if serverAddress == "" { if serverAddress == "" {
serverAddress = IndexServerAddress() serverAddress = IndexServerAddress()
} }

View File

@ -9,6 +9,7 @@ import (
"github.com/dotcloud/docker/archive" "github.com/dotcloud/docker/archive"
"github.com/dotcloud/docker/auth" "github.com/dotcloud/docker/auth"
"github.com/dotcloud/docker/registry" "github.com/dotcloud/docker/registry"
"github.com/dotcloud/docker/runconfig"
"github.com/dotcloud/docker/utils" "github.com/dotcloud/docker/utils"
"io" "io"
"io/ioutil" "io/ioutil"
@ -38,7 +39,7 @@ type buildFile struct {
image string image string
maintainer string maintainer string
config *Config config *runconfig.Config
contextPath string contextPath string
context *utils.TarSum context *utils.TarSum
@ -64,10 +65,13 @@ type buildFile struct {
func (b *buildFile) clearTmp(containers map[string]struct{}) { func (b *buildFile) clearTmp(containers map[string]struct{}) {
for c := range containers { for c := range containers {
tmp := b.runtime.Get(c) tmp := b.runtime.Get(c)
b.runtime.Destroy(tmp) if err := b.runtime.Destroy(tmp); err != nil {
fmt.Fprintf(b.outStream, "Error removing intermediate container %s: %s\n", utils.TruncateID(c), err.Error())
} else {
fmt.Fprintf(b.outStream, "Removing intermediate container %s\n", utils.TruncateID(c)) fmt.Fprintf(b.outStream, "Removing intermediate container %s\n", utils.TruncateID(c))
} }
} }
}
func (b *buildFile) CmdFrom(name string) error { func (b *buildFile) CmdFrom(name string) error {
image, err := b.runtime.repositories.LookupImage(name) image, err := b.runtime.repositories.LookupImage(name)
@ -101,7 +105,7 @@ func (b *buildFile) CmdFrom(name string) error {
} }
} }
b.image = image.ID b.image = image.ID
b.config = &Config{} b.config = &runconfig.Config{}
if image.Config != nil { if image.Config != nil {
b.config = image.Config b.config = image.Config
} }
@ -158,14 +162,14 @@ func (b *buildFile) CmdRun(args string) error {
if b.image == "" { if b.image == "" {
return fmt.Errorf("Please provide a source image with `from` prior to run") return fmt.Errorf("Please provide a source image with `from` prior to run")
} }
config, _, _, err := ParseRun(append([]string{b.image}, b.buildCmdFromJson(args)...), nil) config, _, _, err := runconfig.Parse(append([]string{b.image}, b.buildCmdFromJson(args)...), nil)
if err != nil { if err != nil {
return err return err
} }
cmd := b.config.Cmd cmd := b.config.Cmd
b.config.Cmd = nil b.config.Cmd = nil
MergeConfig(b.config, config) runconfig.Merge(b.config, config)
defer func(cmd []string) { b.config.Cmd = cmd }(cmd) defer func(cmd []string) { b.config.Cmd = cmd }(cmd)
@ -179,11 +183,20 @@ func (b *buildFile) CmdRun(args string) error {
return nil return nil
} }
cid, err := b.run() c, err := b.create()
if err != nil { if err != nil {
return err return err
} }
if err := b.commit(cid, cmd, "run"); err != nil { // Ensure that we keep the container mounted until the commit
// to avoid unmounting and then immediately remounting
c.Mount()
defer c.Unmount()
err = b.run(c)
if err != nil {
return err
}
if err := b.commit(c.ID, cmd, "run"); err != nil {
return err return err
} }
@ -342,7 +355,7 @@ func (b *buildFile) checkPathForAddition(orig string) error {
return nil return nil
} }
func (b *buildFile) addContext(container *Container, orig, dest string) error { func (b *buildFile) addContext(container *Container, orig, dest string, remote bool) error {
var ( var (
origPath = path.Join(b.contextPath, orig) origPath = path.Join(b.contextPath, orig)
destPath = path.Join(container.BasefsPath(), dest) destPath = path.Join(container.BasefsPath(), dest)
@ -358,21 +371,40 @@ func (b *buildFile) addContext(container *Container, orig, dest string) error {
} }
return err return err
} }
if fi.IsDir() { if fi.IsDir() {
if err := archive.CopyWithTar(origPath, destPath); err != nil { if err := archive.CopyWithTar(origPath, destPath); err != nil {
return err return err
} }
return nil
}
// First try to unpack the source as an archive // First try to unpack the source as an archive
} else if err := archive.UntarPath(origPath, destPath); err != nil { // To support the untar feature we need to clean up the path a little bit,
// because tar is very forgiving. First we need to strip off the archive's
// filename from the path, but it is only appended when the destination does not end in /.
tarDest := destPath
if strings.HasSuffix(tarDest, "/") {
tarDest = filepath.Dir(destPath)
}
// If we are adding a remote file, do not try to untar it
if !remote {
// try to successfully untar the orig
if err := archive.UntarPath(origPath, tarDest); err == nil {
return nil
}
utils.Debugf("Couldn't untar %s to %s: %s", origPath, destPath, err) utils.Debugf("Couldn't untar %s to %s: %s", origPath, destPath, err)
}
// If that fails, just copy it as a regular file // If that fails, just copy it as a regular file
// but do not use all the magic path handling for the tar path
if err := os.MkdirAll(path.Dir(destPath), 0755); err != nil { if err := os.MkdirAll(path.Dir(destPath), 0755); err != nil {
return err return err
} }
if err := archive.CopyWithTar(origPath, destPath); err != nil { if err := archive.CopyWithTar(origPath, destPath); err != nil {
return err return err
} }
}
return nil return nil
} }
@ -399,14 +431,15 @@ func (b *buildFile) CmdAdd(args string) error {
b.config.Cmd = []string{"/bin/sh", "-c", fmt.Sprintf("#(nop) ADD %s in %s", orig, dest)} b.config.Cmd = []string{"/bin/sh", "-c", fmt.Sprintf("#(nop) ADD %s in %s", orig, dest)}
b.config.Image = b.image b.config.Image = b.image
// FIXME: do we really need this?
var ( var (
origPath = orig origPath = orig
destPath = dest destPath = dest
remoteHash string remoteHash string
isRemote bool
) )
if utils.IsURL(orig) { if utils.IsURL(orig) {
isRemote = true
resp, err := utils.Download(orig) resp, err := utils.Download(orig)
if err != nil { if err != nil {
return err return err
@ -435,6 +468,7 @@ func (b *buildFile) CmdAdd(args string) error {
} }
tarSum := utils.TarSum{Reader: r, DisableCompression: true} tarSum := utils.TarSum{Reader: r, DisableCompression: true}
remoteHash = tarSum.Sum(nil) remoteHash = tarSum.Sum(nil)
r.Close()
// If the destination is a directory, figure out the filename. // If the destination is a directory, figure out the filename.
if strings.HasSuffix(dest, "/") { if strings.HasSuffix(dest, "/") {
@ -515,7 +549,7 @@ func (b *buildFile) CmdAdd(args string) error {
} }
defer container.Unmount() defer container.Unmount()
if err := b.addContext(container, origPath, destPath); err != nil { if err := b.addContext(container, origPath, destPath, isRemote); err != nil {
return err return err
} }
@ -554,16 +588,16 @@ func (sf *StderrFormater) Write(buf []byte) (int, error) {
return len(buf), err return len(buf), err
} }
func (b *buildFile) run() (string, error) { func (b *buildFile) create() (*Container, error) {
if b.image == "" { if b.image == "" {
return "", fmt.Errorf("Please provide a source image with `from` prior to run") return nil, fmt.Errorf("Please provide a source image with `from` prior to run")
} }
b.config.Image = b.image b.config.Image = b.image
// Create the container and start it // Create the container and start it
c, _, err := b.runtime.Create(b.config, "") c, _, err := b.runtime.Create(b.config, "")
if err != nil { if err != nil {
return "", err return nil, err
} }
b.tmpContainers[c.ID] = struct{}{} b.tmpContainers[c.ID] = struct{}{}
fmt.Fprintf(b.outStream, " ---> Running in %s\n", utils.TruncateID(c.ID)) fmt.Fprintf(b.outStream, " ---> Running in %s\n", utils.TruncateID(c.ID))
@ -572,6 +606,10 @@ func (b *buildFile) run() (string, error) {
c.Path = b.config.Cmd[0] c.Path = b.config.Cmd[0]
c.Args = b.config.Cmd[1:] c.Args = b.config.Cmd[1:]
return c, nil
}
func (b *buildFile) run(c *Container) error {
var errCh chan error var errCh chan error
if b.verbose { if b.verbose {
@ -582,12 +620,12 @@ func (b *buildFile) run() (string, error) {
//start the container //start the container
if err := c.Start(); err != nil { if err := c.Start(); err != nil {
return "", err return err
} }
if errCh != nil { if errCh != nil {
if err := <-errCh; err != nil { if err := <-errCh; err != nil {
return "", err return err
} }
} }
@ -597,10 +635,10 @@ func (b *buildFile) run() (string, error) {
Message: fmt.Sprintf("The command %v returned a non-zero code: %d", b.config.Cmd, ret), Message: fmt.Sprintf("The command %v returned a non-zero code: %d", b.config.Cmd, ret),
Code: ret, Code: ret,
} }
return "", err return err
} }
return c.ID, nil return nil
} }
// Commit the container <id> with the autorun command <autoCmd> // Commit the container <id> with the autorun command <autoCmd>
@ -742,7 +780,7 @@ func NewBuildFile(srv *Server, outStream, errStream io.Writer, verbose, utilizeC
return &buildFile{ return &buildFile{
runtime: srv.runtime, runtime: srv.runtime,
srv: srv, srv: srv,
config: &Config{}, config: &runconfig.Config{},
outStream: outStream, outStream: outStream,
errStream: errStream, errStream: errStream,
tmpContainers: make(map[string]struct{}), tmpContainers: make(map[string]struct{}),
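To summarize the run-step refactor in this file: the old `run()` returning a container ID becomes `create()` plus `run(c)`, so the caller can keep the rootfs mounted across the run and the following commit. A self-contained toy version of the new call sequence (all names below are stand-ins, not the real buildfile types):

```
package main

import "fmt"

// Stand-ins for the buildfile pieces above (illustrative only).
type container struct{ ID string }

func (c *container) Mount() error   { fmt.Println("mount", c.ID); return nil }
func (c *container) Unmount() error { fmt.Println("unmount", c.ID); return nil }

func create() (*container, error)            { return &container{ID: "abc123"}, nil }
func run(c *container) error                 { fmt.Println("run", c.ID); return nil }
func commit(id string, comment string) error { fmt.Println("commit", id, comment); return nil }

func main() {
	c, err := create()
	if err != nil {
		panic(err)
	}
	// Keep the rootfs mounted across run+commit, as the hunk above does,
	// so commit does not pay for an unmount/remount cycle.
	c.Mount()
	defer c.Unmount()
	if err := run(c); err != nil {
		panic(err)
	}
	if err := commit(c.ID, "run"); err != nil {
		panic(err)
	}
}
```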

View File

@ -1,16 +1,17 @@
package docker package docker
import ( import (
"github.com/dotcloud/docker/runconfig"
"strings" "strings"
"testing" "testing"
) )
func parse(t *testing.T, args string) (*Config, *HostConfig, error) { func parse(t *testing.T, args string) (*runconfig.Config, *runconfig.HostConfig, error) {
config, hostConfig, _, err := ParseRun(strings.Split(args+" ubuntu bash", " "), nil) config, hostConfig, _, err := runconfig.Parse(strings.Split(args+" ubuntu bash", " "), nil)
return config, hostConfig, err return config, hostConfig, err
} }
func mustParse(t *testing.T, args string) (*Config, *HostConfig) { func mustParse(t *testing.T, args string) (*runconfig.Config, *runconfig.HostConfig) {
config, hostConfig, err := parse(t, args) config, hostConfig, err := parse(t, args)
if err != nil { if err != nil {
t.Fatal(err) t.Fatal(err)

View File

@ -39,6 +39,7 @@ func DaemonConfigFromJob(job *engine.Job) *DaemonConfig {
EnableIptables: job.GetenvBool("EnableIptables"), EnableIptables: job.GetenvBool("EnableIptables"),
EnableIpForward: job.GetenvBool("EnableIpForward"), EnableIpForward: job.GetenvBool("EnableIpForward"),
BridgeIP: job.Getenv("BridgeIP"), BridgeIP: job.Getenv("BridgeIP"),
BridgeIface: job.Getenv("BridgeIface"),
DefaultIp: net.ParseIP(job.Getenv("DefaultIp")), DefaultIp: net.ParseIP(job.Getenv("DefaultIp")),
InterContainerCommunication: job.GetenvBool("InterContainerCommunication"), InterContainerCommunication: job.GetenvBool("InterContainerCommunication"),
GraphDriver: job.Getenv("GraphDriver"), GraphDriver: job.Getenv("GraphDriver"),
@ -51,7 +52,7 @@ func DaemonConfigFromJob(job *engine.Job) *DaemonConfig {
} else { } else {
config.Mtu = GetDefaultNetworkMtu() config.Mtu = GetDefaultNetworkMtu()
} }
config.DisableNetwork = job.Getenv("BridgeIface") == DisableNetworkBridge config.DisableNetwork = config.BridgeIface == DisableNetworkBridge
return config return config
} }

View File

@ -8,8 +8,10 @@ import (
"github.com/dotcloud/docker/engine" "github.com/dotcloud/docker/engine"
"github.com/dotcloud/docker/execdriver" "github.com/dotcloud/docker/execdriver"
"github.com/dotcloud/docker/graphdriver" "github.com/dotcloud/docker/graphdriver"
"github.com/dotcloud/docker/pkg/mount" "github.com/dotcloud/docker/links"
"github.com/dotcloud/docker/nat"
"github.com/dotcloud/docker/pkg/term" "github.com/dotcloud/docker/pkg/term"
"github.com/dotcloud/docker/runconfig"
"github.com/dotcloud/docker/utils" "github.com/dotcloud/docker/utils"
"github.com/kr/pty" "github.com/kr/pty"
"io" "io"
@ -17,7 +19,6 @@ import (
"log" "log"
"os" "os"
"path" "path"
"path/filepath"
"strings" "strings"
"sync" "sync"
"syscall" "syscall"
@ -27,6 +28,8 @@ import (
var ( var (
ErrNotATTY = errors.New("The PTY is not a file") ErrNotATTY = errors.New("The PTY is not a file")
ErrNoTTY = errors.New("No PTY found") ErrNoTTY = errors.New("No PTY found")
ErrContainerStart = errors.New("The container failed to start. Unknown error")
ErrContainerStartTimeout = errors.New("The container failed to start due to a timeout.")
) )
type Container struct { type Container struct {
@ -41,7 +44,7 @@ type Container struct {
Path string Path string
Args []string Args []string
Config *Config Config *runconfig.Config
State State State State
Image string Image string
@ -67,160 +70,12 @@ type Container struct {
// Store rw/ro in a separate structure to preserve reverse-compatibility on-disk. // Store rw/ro in a separate structure to preserve reverse-compatibility on-disk.
// Easier than migrating older container configs :) // Easier than migrating older container configs :)
VolumesRW map[string]bool VolumesRW map[string]bool
hostConfig *HostConfig hostConfig *runconfig.HostConfig
activeLinks map[string]*Link activeLinks map[string]*links.Link
}
// Note: the Config structure should hold only portable information about the container.
// Here, "portable" means "independent from the host we are running on".
// Non-portable information *should* appear in HostConfig.
type Config struct {
Hostname string
Domainname string
User string
Memory int64 // Memory limit (in bytes)
MemorySwap int64 // Total memory usage (memory + swap); set `-1' to disable swap
CpuShares int64 // CPU shares (relative weight vs. other containers)
AttachStdin bool
AttachStdout bool
AttachStderr bool
PortSpecs []string // Deprecated - Can be in the format of 8080/tcp
ExposedPorts map[Port]struct{}
Tty bool // Attach standard streams to a tty, including stdin if it is not closed.
OpenStdin bool // Open stdin
StdinOnce bool // If true, close stdin after the 1 attached client disconnects.
Env []string
Cmd []string
Dns []string
Image string // Name of the image as it was passed by the operator (eg. could be symbolic)
Volumes map[string]struct{}
VolumesFrom string
WorkingDir string
Entrypoint []string
NetworkDisabled bool
OnBuild []string
}
func ContainerConfigFromJob(job *engine.Job) *Config {
config := &Config{
Hostname: job.Getenv("Hostname"),
Domainname: job.Getenv("Domainname"),
User: job.Getenv("User"),
Memory: job.GetenvInt64("Memory"),
MemorySwap: job.GetenvInt64("MemorySwap"),
CpuShares: job.GetenvInt64("CpuShares"),
AttachStdin: job.GetenvBool("AttachStdin"),
AttachStdout: job.GetenvBool("AttachStdout"),
AttachStderr: job.GetenvBool("AttachStderr"),
Tty: job.GetenvBool("Tty"),
OpenStdin: job.GetenvBool("OpenStdin"),
StdinOnce: job.GetenvBool("StdinOnce"),
Image: job.Getenv("Image"),
VolumesFrom: job.Getenv("VolumesFrom"),
WorkingDir: job.Getenv("WorkingDir"),
NetworkDisabled: job.GetenvBool("NetworkDisabled"),
}
job.GetenvJson("ExposedPorts", &config.ExposedPorts)
job.GetenvJson("Volumes", &config.Volumes)
if PortSpecs := job.GetenvList("PortSpecs"); PortSpecs != nil {
config.PortSpecs = PortSpecs
}
if Env := job.GetenvList("Env"); Env != nil {
config.Env = Env
}
if Cmd := job.GetenvList("Cmd"); Cmd != nil {
config.Cmd = Cmd
}
if Dns := job.GetenvList("Dns"); Dns != nil {
config.Dns = Dns
}
if Entrypoint := job.GetenvList("Entrypoint"); Entrypoint != nil {
config.Entrypoint = Entrypoint
}
return config
}
type HostConfig struct {
Binds []string
ContainerIDFile string
LxcConf []KeyValuePair
Privileged bool
PortBindings map[Port][]PortBinding
Links []string
PublishAllPorts bool
}
func ContainerHostConfigFromJob(job *engine.Job) *HostConfig {
hostConfig := &HostConfig{
ContainerIDFile: job.Getenv("ContainerIDFile"),
Privileged: job.GetenvBool("Privileged"),
PublishAllPorts: job.GetenvBool("PublishAllPorts"),
}
job.GetenvJson("LxcConf", &hostConfig.LxcConf)
job.GetenvJson("PortBindings", &hostConfig.PortBindings)
if Binds := job.GetenvList("Binds"); Binds != nil {
hostConfig.Binds = Binds
}
if Links := job.GetenvList("Links"); Links != nil {
hostConfig.Links = Links
}
return hostConfig
}
type BindMap struct {
SrcPath string
DstPath string
Mode string
}
var (
ErrContainerStart = errors.New("The container failed to start. Unknown error")
ErrContainerStartTimeout = errors.New("The container failed to start due to a timeout.")
ErrInvalidWorikingDirectory = errors.New("The working directory is invalid. It needs to be an absolute path.")
ErrConflictAttachDetach = errors.New("Conflicting options: -a and -d")
ErrConflictDetachAutoRemove = errors.New("Conflicting options: -rm and -d")
)
type KeyValuePair struct {
Key string
Value string
}
type PortBinding struct {
HostIp string
HostPort string
}
// 80/tcp
type Port string
func (p Port) Proto() string {
parts := strings.Split(string(p), "/")
if len(parts) == 1 {
return "tcp"
}
return parts[1]
}
func (p Port) Port() string {
return strings.Split(string(p), "/")[0]
}
func (p Port) Int() int {
i, err := parsePort(p.Port())
if err != nil {
panic(err)
}
return i
}
func NewPort(proto, port string) Port {
return Port(fmt.Sprintf("%s/%s", port, proto))
} }
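The `Port` helpers removed here move into the new `nat` package (see the `nat.ParsePort` and `nat.PortBinding` call sites later in this diff). Assuming the constructor kept its name after the move, usage looks like:

```
package main

import (
	"fmt"

	"github.com/dotcloud/docker/nat"
)

func main() {
	p := nat.NewPort("tcp", "8080") // stored as "8080/tcp"
	fmt.Println(p.Proto())          // tcp
	fmt.Println(p.Port())           // 8080
	fmt.Println(p.Int())            // 8080 (panics on a malformed port)
}
```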
// FIXME: move deprecated port stuff to nat to clean up the core.
type PortMapping map[string]string // Deprecated type PortMapping map[string]string // Deprecated
type NetworkSettings struct { type NetworkSettings struct {
@ -229,13 +84,13 @@ type NetworkSettings struct {
Gateway string Gateway string
Bridge string Bridge string
PortMapping map[string]PortMapping // Deprecated PortMapping map[string]PortMapping // Deprecated
Ports map[Port][]PortBinding Ports nat.PortMap
} }
func (settings *NetworkSettings) PortMappingAPI() *engine.Table { func (settings *NetworkSettings) PortMappingAPI() *engine.Table {
var outs = engine.NewTable("", 0) var outs = engine.NewTable("", 0)
for port, bindings := range settings.Ports { for port, bindings := range settings.Ports {
p, _ := parsePort(port.Port()) p, _ := nat.ParsePort(port.Port())
if len(bindings) == 0 { if len(bindings) == 0 {
out := &engine.Env{} out := &engine.Env{}
out.SetInt("PublicPort", p) out.SetInt("PublicPort", p)
@ -245,7 +100,7 @@ func (settings *NetworkSettings) PortMappingAPI() *engine.Table {
} }
for _, binding := range bindings { for _, binding := range bindings {
out := &engine.Env{} out := &engine.Env{}
h, _ := parsePort(binding.HostPort) h, _ := nat.ParsePort(binding.HostPort)
out.SetInt("PrivatePort", p) out.SetInt("PrivatePort", p)
out.SetInt("PublicPort", h) out.SetInt("PublicPort", h)
out.Set("Type", port.Proto()) out.Set("Type", port.Proto())
@ -322,7 +177,7 @@ func (container *Container) ToDisk() (err error) {
} }
func (container *Container) readHostConfig() error { func (container *Container) readHostConfig() error {
container.hostConfig = &HostConfig{} container.hostConfig = &runconfig.HostConfig{}
// If the hostconfig file does not exist, do not read it. // If the hostconfig file does not exist, do not read it.
// (We still have to initialize container.hostConfig, // (We still have to initialize container.hostConfig,
// but that's OK, since we just did that above.) // but that's OK, since we just did that above.)
@ -366,6 +221,7 @@ func (container *Container) setupPty() error {
container.ptyMaster = ptyMaster container.ptyMaster = ptyMaster
container.command.Stdout = ptySlave container.command.Stdout = ptySlave
container.command.Stderr = ptySlave container.command.Stderr = ptySlave
container.command.Console = ptySlave.Name()
// Copy the PTYs to our broadcasters // Copy the PTYs to our broadcasters
go func() { go func() {
@ -637,17 +493,7 @@ func (container *Container) Start() (err error) {
log.Printf("WARNING: IPv4 forwarding is disabled. Networking will not work") log.Printf("WARNING: IPv4 forwarding is disabled. Networking will not work")
} }
if container.Volumes == nil || len(container.Volumes) == 0 { if err := prepareVolumesForContainer(container); err != nil {
container.Volumes = make(map[string]string)
container.VolumesRW = make(map[string]bool)
}
// Apply volumes from another container if requested
if err := container.applyExternalVolumes(); err != nil {
return err
}
if err := container.createVolumes(); err != nil {
return err return err
} }
@ -671,7 +517,7 @@ func (container *Container) Start() (err error) {
} }
if len(children) > 0 { if len(children) > 0 {
container.activeLinks = make(map[string]*Link, len(children)) container.activeLinks = make(map[string]*links.Link, len(children))
// If we encounter an error make sure that we rollback any network // If we encounter an error make sure that we rollback any network
// config and ip table changes // config and ip table changes
@ -682,8 +528,19 @@ func (container *Container) Start() (err error) {
container.activeLinks = nil container.activeLinks = nil
} }
for p, child := range children { for linkAlias, child := range children {
link, err := NewLink(container, child, p, runtime.eng) if !child.State.IsRunning() {
return fmt.Errorf("Cannot link to a non running container: %s AS %s", child.Name, linkAlias)
}
link, err := links.NewLink(
container.NetworkSettings.IPAddress,
child.NetworkSettings.IPAddress,
linkAlias,
child.Config.Env,
child.Config.ExposedPorts,
runtime.eng)
if err != nil { if err != nil {
rollback() rollback()
return err return err
@ -721,62 +578,10 @@ func (container *Container) Start() (err error) {
return err return err
} }
// Setup the root fs as a bind mount of the base fs if err := mountVolumesForContainer(container, envPath); err != nil {
root := container.RootfsPath()
if err := os.MkdirAll(root, 0755); err != nil && !os.IsExist(err) {
return nil
}
// Create a bind mount of the base fs as a place where we can add mounts
// without affecting the ability to access the base fs
if err := mount.Mount(container.basefs, root, "none", "bind,rw"); err != nil {
return err return err
} }
// Make sure the root fs is private so the mounts here don't propagate to basefs
if err := mount.ForceMount(root, root, "none", "private"); err != nil {
return err
}
// Mount docker specific files into the containers root fs
if err := mount.Mount(runtime.sysInitPath, path.Join(root, "/.dockerinit"), "none", "bind,ro"); err != nil {
return err
}
if err := mount.Mount(envPath, path.Join(root, "/.dockerenv"), "none", "bind,ro"); err != nil {
return err
}
if err := mount.Mount(container.ResolvConfPath, path.Join(root, "/etc/resolv.conf"), "none", "bind,ro"); err != nil {
return err
}
if container.HostnamePath != "" && container.HostsPath != "" {
if err := mount.Mount(container.HostnamePath, path.Join(root, "/etc/hostname"), "none", "bind,ro"); err != nil {
return err
}
if err := mount.Mount(container.HostsPath, path.Join(root, "/etc/hosts"), "none", "bind,ro"); err != nil {
return err
}
}
// Mount user specified volumes
for r, v := range container.Volumes {
mountAs := "ro"
if container.VolumesRW[r] {
mountAs = "rw"
}
r = path.Join(root, r)
if p, err := utils.FollowSymlinkInScope(r, root); err != nil {
return err
} else {
r = p
}
if err := mount.Mount(v, r, "none", fmt.Sprintf("bind,%s", mountAs)); err != nil {
return err
}
}
populateCommand(container) populateCommand(container)
// Setup logging of stdout and stderr to disk // Setup logging of stdout and stderr to disk
@ -829,205 +634,6 @@ func (container *Container) Start() (err error) {
return nil return nil
} }
func (container *Container) getBindMap() (map[string]BindMap, error) {
// Create the requested bind mounts
binds := make(map[string]BindMap)
// Define illegal container destinations
illegalDsts := []string{"/", "."}
for _, bind := range container.hostConfig.Binds {
// FIXME: factorize bind parsing in parseBind
var src, dst, mode string
arr := strings.Split(bind, ":")
if len(arr) == 2 {
src = arr[0]
dst = arr[1]
mode = "rw"
} else if len(arr) == 3 {
src = arr[0]
dst = arr[1]
mode = arr[2]
} else {
return nil, fmt.Errorf("Invalid bind specification: %s", bind)
}
// Bail if trying to mount to an illegal destination
for _, illegal := range illegalDsts {
if dst == illegal {
return nil, fmt.Errorf("Illegal bind destination: %s", dst)
}
}
bindMap := BindMap{
SrcPath: src,
DstPath: dst,
Mode: mode,
}
binds[path.Clean(dst)] = bindMap
}
return binds, nil
}
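The FIXME above asks for this parsing to be factored into a `parseBind` helper; a minimal standalone sketch of the accepted format (`src:dst` with an optional `:mode`, defaulting to `rw`):

```
package main

import (
	"fmt"
	"strings"
)

func parseBind(bind string) (src, dst, mode string, err error) {
	switch arr := strings.Split(bind, ":"); len(arr) {
	case 2:
		return arr[0], arr[1], "rw", nil
	case 3:
		return arr[0], arr[1], arr[2], nil
	}
	return "", "", "", fmt.Errorf("Invalid bind specification: %s", bind)
}

func main() {
	fmt.Println(parseBind("/tmp/data:/data"))    // /tmp/data /data rw <nil>
	fmt.Println(parseBind("/tmp/data:/data:ro")) // /tmp/data /data ro <nil>
}
```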
func (container *Container) createVolumes() error {
binds, err := container.getBindMap()
if err != nil {
return err
}
volumesDriver := container.runtime.volumes.driver
// Create the requested volumes if they don't exist
for volPath := range container.Config.Volumes {
volPath = path.Clean(volPath)
volIsDir := true
// Skip existing volumes
if _, exists := container.Volumes[volPath]; exists {
continue
}
var srcPath string
var isBindMount bool
srcRW := false
// If an external bind is defined for this volume, use that as a source
if bindMap, exists := binds[volPath]; exists {
isBindMount = true
srcPath = bindMap.SrcPath
if strings.ToLower(bindMap.Mode) == "rw" {
srcRW = true
}
if stat, err := os.Stat(bindMap.SrcPath); err != nil {
return err
} else {
volIsDir = stat.IsDir()
}
// Otherwise create a directory in $ROOT/volumes/ and use that
} else {
// Do not pass a container as the parameter for the volume creation.
// The graph driver would use the container's information (Image) to
// create the parent.
c, err := container.runtime.volumes.Create(nil, nil, "", "", nil)
if err != nil {
return err
}
srcPath, err = volumesDriver.Get(c.ID)
if err != nil {
return fmt.Errorf("Driver %s failed to get volume rootfs %s: %s", volumesDriver, c.ID, err)
}
srcRW = true // RW by default
}
if p, err := filepath.EvalSymlinks(srcPath); err != nil {
return err
} else {
srcPath = p
}
container.Volumes[volPath] = srcPath
container.VolumesRW[volPath] = srcRW
// Create the mountpoint
volPath = path.Join(container.basefs, volPath)
rootVolPath, err := utils.FollowSymlinkInScope(volPath, container.basefs)
if err != nil {
return err
}
if _, err := os.Stat(rootVolPath); err != nil {
if os.IsNotExist(err) {
if volIsDir {
if err := os.MkdirAll(rootVolPath, 0755); err != nil {
return err
}
} else {
if err := os.MkdirAll(path.Dir(rootVolPath), 0755); err != nil {
return err
}
if f, err := os.OpenFile(rootVolPath, os.O_CREATE, 0755); err != nil {
return err
} else {
f.Close()
}
}
}
}
// Do not copy or change permissions if we are mounting from the host
if srcRW && !isBindMount {
volList, err := ioutil.ReadDir(rootVolPath)
if err != nil {
return err
}
if len(volList) > 0 {
srcList, err := ioutil.ReadDir(srcPath)
if err != nil {
return err
}
if len(srcList) == 0 {
// If the source volume is empty copy files from the root into the volume
if err := archive.CopyWithTar(rootVolPath, srcPath); err != nil {
return err
}
var stat syscall.Stat_t
if err := syscall.Stat(rootVolPath, &stat); err != nil {
return err
}
var srcStat syscall.Stat_t
if err := syscall.Stat(srcPath, &srcStat); err != nil {
return err
}
// Change the source volume's ownership if it differs from the root
// files that were just copied
if stat.Uid != srcStat.Uid || stat.Gid != srcStat.Gid {
if err := os.Chown(srcPath, int(stat.Uid), int(stat.Gid)); err != nil {
return err
}
}
}
}
}
}
return nil
}
func (container *Container) applyExternalVolumes() error {
if container.Config.VolumesFrom != "" {
containerSpecs := strings.Split(container.Config.VolumesFrom, ",")
for _, containerSpec := range containerSpecs {
mountRW := true
specParts := strings.SplitN(containerSpec, ":", 2)
switch len(specParts) {
case 0:
return fmt.Errorf("Malformed volumes-from specification: %s", container.Config.VolumesFrom)
case 2:
switch specParts[1] {
case "ro":
mountRW = false
case "rw": // mountRW is already true
default:
return fmt.Errorf("Malformed volumes-from specification: %s", containerSpec)
}
}
c := container.runtime.Get(specParts[0])
if c == nil {
return fmt.Errorf("Container %s not found. Impossible to mount its volumes", container.ID)
}
for volPath, id := range c.Volumes {
if _, exists := container.Volumes[volPath]; exists {
continue
}
if err := os.MkdirAll(path.Join(container.basefs, volPath), 0755); err != nil {
return err
}
container.Volumes[volPath] = id
if isRW, exists := c.VolumesRW[volPath]; exists {
container.VolumesRW[volPath] = isRW && mountRW
}
}
}
}
return nil
}
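For reference, each `-volumes-from` entry handled above is `<container>` or `<container>:<ro|rw>`, comma-separated; a small standalone sketch of the mode handling:

```
package main

import (
	"fmt"
	"strings"
)

func main() {
	// Each "-volumes-from" entry is "<container>" or "<container>:<ro|rw>".
	for _, spec := range strings.Split("datastore:ro,cache", ",") {
		parts := strings.SplitN(spec, ":", 2)
		mountRW := !(len(parts) == 2 && parts[1] == "ro")
		fmt.Printf("%s -> read-write=%v\n", parts[0], mountRW)
	}
}
```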
func (container *Container) Run() error { func (container *Container) Run() error {
if err := container.Start(); err != nil { if err := container.Start(); err != nil {
return err return err
@ -1152,8 +758,8 @@ func (container *Container) allocateNetwork() error {
} }
var ( var (
portSpecs = make(map[Port]struct{}) portSpecs = make(nat.PortSet)
bindings = make(map[Port][]PortBinding) bindings = make(nat.PortMap)
) )
if !container.State.IsGhost() { if !container.State.IsGhost() {
@ -1177,7 +783,7 @@ func (container *Container) allocateNetwork() error {
for port := range portSpecs { for port := range portSpecs {
binding := bindings[port] binding := bindings[port]
if container.hostConfig.PublishAllPorts && len(binding) == 0 { if container.hostConfig.PublishAllPorts && len(binding) == 0 {
binding = append(binding, PortBinding{}) binding = append(binding, nat.PortBinding{})
} }
for i := 0; i < len(binding); i++ { for i := 0; i < len(binding); i++ {
@ -1300,29 +906,7 @@ func (container *Container) cleanup() {
} }
} }
var ( unmountVolumesForContainer(container)
root = container.RootfsPath()
mounts = []string{
root,
path.Join(root, "/.dockerinit"),
path.Join(root, "/.dockerenv"),
path.Join(root, "/etc/resolv.conf"),
}
)
if container.HostnamePath != "" && container.HostsPath != "" {
mounts = append(mounts, path.Join(root, "/etc/hostname"), path.Join(root, "/etc/hosts"))
}
for r := range container.Volumes {
mounts = append(mounts, path.Join(root, r))
}
for i := len(mounts) - 1; i >= 0; i-- {
if lastError := mount.Unmount(mounts[i]); lastError != nil {
log.Printf("Failed to umount %v: %v", mounts[i], lastError)
}
}
if err := container.Unmount(); err != nil { if err := container.Unmount(); err != nil {
log.Printf("%v: Failed to umount filesystem: %v", container.ID, err) log.Printf("%v: Failed to umount filesystem: %v", container.ID, err)
@ -1390,6 +974,13 @@ func (container *Container) Stop(seconds int) error {
} }
func (container *Container) Restart(seconds int) error { func (container *Container) Restart(seconds int) error {
// Avoid unnecessarily unmounting and then directly mounting
// the container when the container stops and then starts
// again
if err := container.Mount(); err == nil {
defer container.Unmount()
}
if err := container.Stop(seconds); err != nil { if err := container.Stop(seconds); err != nil {
return err return err
} }
@ -1422,7 +1013,11 @@ func (container *Container) ExportRw() (archive.Archive, error) {
container.Unmount() container.Unmount()
return nil, err return nil, err
} }
return EofReader(archive, func() { container.Unmount() }), nil return utils.NewReadCloserWrapper(archive, func() error {
err := archive.Close()
container.Unmount()
return err
}), nil
} }
func (container *Container) Export() (archive.Archive, error) { func (container *Container) Export() (archive.Archive, error) {
@ -1435,7 +1030,11 @@ func (container *Container) Export() (archive.Archive, error) {
container.Unmount() container.Unmount()
return nil, err return nil, err
} }
return EofReader(archive, func() { container.Unmount() }), nil return utils.NewReadCloserWrapper(archive, func() error {
err := archive.Close()
container.Unmount()
return err
}), nil
} }
func (container *Container) WaitTimeout(timeout time.Duration) error { func (container *Container) WaitTimeout(timeout time.Duration) error {
@ -1562,7 +1161,7 @@ func (container *Container) GetSize() (int64, int64) {
return sizeRw, sizeRootfs return sizeRw, sizeRootfs
} }
func (container *Container) Copy(resource string) (archive.Archive, error) { func (container *Container) Copy(resource string) (io.ReadCloser, error) {
if err := container.Mount(); err != nil { if err := container.Mount(); err != nil {
return nil, err return nil, err
} }
@ -1589,11 +1188,15 @@ func (container *Container) Copy(resource string) (archive.Archive, error) {
if err != nil { if err != nil {
return nil, err return nil, err
} }
return EofReader(archive, func() { container.Unmount() }), nil return utils.NewReadCloserWrapper(archive, func() error {
err := archive.Close()
container.Unmount()
return err
}), nil
} }
// Returns true if the container exposes a certain port // Returns true if the container exposes a certain port
func (container *Container) Exposes(p Port) bool { func (container *Container) Exposes(p nat.Port) bool {
_, exists := container.Config.ExposedPorts[p] _, exists := container.Config.ExposedPorts[p]
return exists return exists
} }
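The `EofReader` call sites above are replaced by `utils.NewReadCloserWrapper`, which defers the unmount until the consumer actually closes the stream and also propagates the archive's own `Close` error. The wrapper is presumably just a reader paired with a custom close function, along these lines (an editorial sketch, not the `utils` source):

```
package main

import (
	"fmt"
	"io"
	"io/ioutil"
	"strings"
)

type readCloserWrapper struct {
	io.Reader
	closer func() error
}

func (r *readCloserWrapper) Close() error { return r.closer() }

func newReadCloserWrapper(r io.Reader, closer func() error) io.ReadCloser {
	return &readCloserWrapper{Reader: r, closer: closer}
}

func main() {
	rc := newReadCloserWrapper(strings.NewReader("layer data"), func() error {
		fmt.Println("unmount happens here, after the consumer is done")
		return nil
	})
	io.Copy(ioutil.Discard, rc)
	rc.Close()
}
```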

View File

@ -1,28 +1,12 @@
package docker package docker
import ( import (
"github.com/dotcloud/docker/nat"
"testing" "testing"
) )
func TestParseLxcConfOpt(t *testing.T) {
opts := []string{"lxc.utsname=docker", "lxc.utsname = docker "}
for _, o := range opts {
k, v, err := parseLxcOpt(o)
if err != nil {
t.FailNow()
}
if k != "lxc.utsname" {
t.Fail()
}
if v != "docker" {
t.Fail()
}
}
}
func TestParseNetworkOptsPrivateOnly(t *testing.T) { func TestParseNetworkOptsPrivateOnly(t *testing.T) {
ports, bindings, err := parsePortSpecs([]string{"192.168.1.100::80"}) ports, bindings, err := nat.ParsePortSpecs([]string{"192.168.1.100::80"})
if err != nil { if err != nil {
t.Fatal(err) t.Fatal(err)
} }
@ -64,7 +48,7 @@ func TestParseNetworkOptsPrivateOnly(t *testing.T) {
} }
func TestParseNetworkOptsPublic(t *testing.T) { func TestParseNetworkOptsPublic(t *testing.T) {
ports, bindings, err := parsePortSpecs([]string{"192.168.1.100:8080:80"}) ports, bindings, err := nat.ParsePortSpecs([]string{"192.168.1.100:8080:80"})
if err != nil { if err != nil {
t.Fatal(err) t.Fatal(err)
} }
@ -106,7 +90,7 @@ func TestParseNetworkOptsPublic(t *testing.T) {
} }
func TestParseNetworkOptsUdp(t *testing.T) { func TestParseNetworkOptsUdp(t *testing.T) {
ports, bindings, err := parsePortSpecs([]string{"192.168.1.100::6000/udp"}) ports, bindings, err := nat.ParsePortSpecs([]string{"192.168.1.100::6000/udp"})
if err != nil { if err != nil {
t.Fatal(err) t.Fatal(err)
} }

View File

@ -8,4 +8,4 @@ Examples
======== ========
* Data container: ./data/Dockerfile creates a data image sharing /data volume * Data container: ./data/Dockerfile creates a data image sharing /data volume
* Firefox: ./firefox/Dockerfile shows a way to dockerize a common multimedia application * Iceweasel: ./iceweasel/Dockerfile shows a way to dockerize a common multimedia application

View File

@ -11,28 +11,28 @@
# # Build data image # # Build data image
# docker build -t data -rm . # docker build -t data -rm .
# #
# # Create a data container. (eg: firefox-data) # # Create a data container. (eg: iceweasel-data)
# docker run -name firefox-data data true # docker run -name iceweasel-data data true
# #
# # List data from it # # List data from it
# docker run -volumes-from firefox-data busybox ls -al /data # docker run -volumes-from iceweasel-data busybox ls -al /data
docker-version 0.6.5 docker-version 0.6.5
# Smallest base image, just to launch a container # Smallest base image, just to launch a container
from busybox FROM busybox
maintainer Daniel Mizyrycki <daniel@docker.com> MAINTAINER Daniel Mizyrycki <daniel@docker.com>
# Create a regular user # Create a regular user
run echo 'sysadmin:x:1000:1000::/data:/bin/sh' >> /etc/passwd RUN echo 'sysadmin:x:1000:1000::/data:/bin/sh' >> /etc/passwd
run echo 'sysadmin:x:1000:' >> /etc/group RUN echo 'sysadmin:x:1000:' >> /etc/group
# Create directory for that user # Create directory for that user
run mkdir /data RUN mkdir /data
run chown sysadmin.sysadmin /data RUN chown sysadmin.sysadmin /data
# Add content to /data. This will keep sysadmin ownership # Add content to /data. This will keep sysadmin ownership
run touch /data/init_volume RUN touch /data/init_volume
# Create /data volume # Create /data volume
VOLUME /data VOLUME /data

View File

@ -1,49 +0,0 @@
# VERSION: 0.7
# DESCRIPTION: Create firefox container with its dependencies
# AUTHOR: Daniel Mizyrycki <daniel@dotcloud.com>
# COMMENTS:
# This file describes how to build a Firefox container with all
# dependencies installed. It uses native X11 unix socket and alsa
# sound devices. Tested on Debian 7.2
# USAGE:
# # Download Firefox Dockerfile
# wget http://raw.github.com/dotcloud/docker/master/contrib/desktop-integration/firefox/Dockerfile
#
# # Build firefox image
# docker build -t firefox -rm .
#
# # Run stateful data-on-host firefox. For ephemeral, remove -v /data/firefox:/data
# docker run -v /data/firefox:/data -v /tmp/.X11-unix:/tmp/.X11-unix \
# -v /dev/snd:/dev/snd -lxc-conf='lxc.cgroup.devices.allow = c 116:* rwm' \
# -e DISPLAY=unix$DISPLAY firefox
#
# # To run stateful dockerized data containers
# docker run -volumes-from firefox-data -v /tmp/.X11-unix:/tmp/.X11-unix \
# -v /dev/snd:/dev/snd -lxc-conf='lxc.cgroup.devices.allow = c 116:* rwm' \
# -e DISPLAY=unix$DISPLAY firefox
docker-version 0.6.5
# Base docker image
from tianon/debian:wheezy
maintainer Daniel Mizyrycki <daniel@docker.com>
# Install firefox dependencies
run echo "deb http://ftp.debian.org/debian/ wheezy main contrib" > /etc/apt/sources.list
run apt-get update
run DEBIAN_FRONTEND=noninteractive apt-get install -y libXrender1 libasound2 \
libdbus-glib-1-2 libgtk2.0-0 libpango1.0-0 libxt6 wget bzip2 sudo
# Install Firefox
run mkdir /application
run cd /application; wget -O - \
http://ftp.mozilla.org/pub/mozilla.org/firefox/releases/25.0/linux-x86_64/en-US/firefox-25.0.tar.bz2 | tar jx
# create sysadmin account
run useradd -m -d /data -p saIVpsc0EVTwA sysadmin
run sed -Ei 's/sudo:x:27:/sudo:x:27:sysadmin/' /etc/group
run sed -Ei 's/(\%sudo\s+ALL=\(ALL\:ALL\) )ALL/\1 NOPASSWD:ALL/' /etc/sudoers
# Autorun firefox. -no-remote is necessary to create a new container, as firefox
# appears to communicate with itself through X11.
cmd ["/bin/sh", "-c", "/usr/bin/sudo -u sysadmin -H -E /application/firefox/firefox -no-remote"]

View File

@ -0,0 +1,41 @@
# VERSION: 0.7
# DESCRIPTION: Create iceweasel container with its dependencies
# AUTHOR: Daniel Mizyrycki <daniel@dotcloud.com>
# COMMENTS:
# This file describes how to build an Iceweasel container with all
# dependencies installed. It uses native X11 unix socket and alsa
# sound devices. Tested on Debian 7.2
# USAGE:
# # Download Iceweasel Dockerfile
# wget http://raw.github.com/dotcloud/docker/master/contrib/desktop-integration/iceweasel/Dockerfile
#
# # Build iceweasel image
# docker build -t iceweasel -rm .
#
# # Run stateful data-on-host iceweasel. For ephemeral, remove -v /data/iceweasel:/data
# docker run -v /data/iceweasel:/data -v /tmp/.X11-unix:/tmp/.X11-unix \
# -v /dev/snd:/dev/snd -lxc-conf='lxc.cgroup.devices.allow = c 116:* rwm' \
# -e DISPLAY=unix$DISPLAY iceweasel
#
# # To run stateful dockerized data containers
# docker run -volumes-from iceweasel-data -v /tmp/.X11-unix:/tmp/.X11-unix \
# -v /dev/snd:/dev/snd -lxc-conf='lxc.cgroup.devices.allow = c 116:* rwm' \
# -e DISPLAY=unix$DISPLAY iceweasel
docker-version 0.6.5
# Base docker image
FROM debian:wheezy
MAINTAINER Daniel Mizyrycki <daniel@docker.com>
# Install Iceweasel and "sudo"
RUN apt-get update && DEBIAN_FRONTEND=noninteractive apt-get install -yq iceweasel sudo
# create sysadmin account
RUN useradd -m -d /data -p saIVpsc0EVTwA sysadmin
RUN sed -Ei 's/sudo:x:27:/sudo:x:27:sysadmin/' /etc/group
RUN sed -Ei 's/(\%sudo\s+ALL=\(ALL\:ALL\) )ALL/\1 NOPASSWD:ALL/' /etc/sudoers
# Autorun iceweasel. -no-remote is necessary to create a new container, as
# iceweasel appears to communicate with itself through X11.
CMD ["/usr/bin/sudo", "-u", "sysadmin", "-H", "-E", "/usr/bin/iceweasel", "-no-remote"]

View File

@ -0,0 +1,123 @@
#!/bin/sh
#
# /etc/rc.d/init.d/docker
#
# Daemon for docker.io
#
# chkconfig: 2345 95 95
# description: Daemon for docker.io
### BEGIN INIT INFO
# Provides: docker
# Required-Start: $network cgconfig
# Required-Stop:
# Should-Start:
# Should-Stop:
# Default-Start: 2 3 4 5
# Default-Stop: 0 1 6
# Short-Description: start and stop docker
# Description: Daemon for docker.io
### END INIT INFO
# Source function library.
. /etc/rc.d/init.d/functions
prog="docker"
exec="/usr/bin/$prog"
pidfile="/var/run/$prog.pid"
lockfile="/var/lock/subsys/$prog"
logfile="/var/log/$prog"
[ -e /etc/sysconfig/$prog ] && . /etc/sysconfig/$prog
prestart() {
service cgconfig status > /dev/null
if [[ $? != 0 ]]; then
service cgconfig start
fi
}
start() {
[ -x $exec ] || exit 5
if ! [ -f $pidfile ]; then
prestart
printf "Starting $prog:\t"
echo "\n$(date)\n" >> $logfile
$exec -d $other_args &>> $logfile &
pid=$!
touch $lockfile
success
echo
else
failure
echo
printf "$pidfile still exists...\n"
exit 7
fi
}
stop() {
echo -n $"Stopping $prog: "
killproc -p $pidfile $prog
retval=$?
echo
[ $retval -eq 0 ] && rm -f $lockfile
return $retval
}
restart() {
stop
start
}
reload() {
restart
}
force_reload() {
restart
}
rh_status() {
status -p $pidfile $prog
}
rh_status_q() {
rh_status >/dev/null 2>&1
}
case "$1" in
start)
rh_status_q && exit 0
$1
;;
stop)
rh_status_q || exit 0
$1
;;
restart)
$1
;;
reload)
rh_status_q || exit 7
$1
;;
force-reload)
force_reload
;;
status)
rh_status
;;
condrestart|try-restart)
rh_status_q || exit 0
restart
;;
*)
echo $"Usage: $0 {start|stop|status|restart|condrestart|try-restart|reload|force-reload}"
exit 2
esac
exit $?

View File

@ -0,0 +1,7 @@
# /etc/sysconfig/docker
#
# Other arguments to pass to the docker daemon process
# These will be parsed by the sysv initscript and appended
# to the arguments list passed to docker -d
other_args=""

View File

@ -51,7 +51,7 @@ done
yum -c "$yum_config" --installroot="$target" --setopt=tsflags=nodocs \ yum -c "$yum_config" --installroot="$target" --setopt=tsflags=nodocs \
--setopt=group_package_types=mandatory -y groupinstall Core --setopt=group_package_types=mandatory -y groupinstall Core
yum -c "$yum_config" --installroot="$mount" -y clean all yum -c "$yum_config" --installroot="$target" -y clean all
cat > "$target"/etc/sysconfig/network <<EOF cat > "$target"/etc/sysconfig/network <<EOF
NETWORKING=yes NETWORKING=yes

View File

@ -0,0 +1,24 @@
<?xml version="1.0" encoding="UTF-8"?>
<!DOCTYPE plist PUBLIC "-//Apple Computer//DTD PLIST 1.0//EN" "http://www.apple.com/DTDs/PropertyList-1.0.dtd">
<plist version="1.0">
<dict>
<key>name</key>
<string>Comments</string>
<key>scope</key>
<string>source.dockerfile</string>
<key>settings</key>
<dict>
<key>shellVariables</key>
<array>
<dict>
<key>name</key>
<string>TM_COMMENT_START</string>
<key>value</key>
<string># </string>
</dict>
</array>
</dict>
<key>uuid</key>
<string>2B215AC0-A7F3-4090-9FF6-F4842BD56CA7</string>
</dict>
</plist>

View File

@ -12,16 +12,38 @@
<array> <array>
<dict> <dict>
<key>match</key> <key>match</key>
<string>^\s*(FROM|MAINTAINER|RUN|CMD|EXPOSE|ENV|ADD)\s</string> <string>^\s*(ONBUILD\s+)?(FROM|MAINTAINER|RUN|EXPOSE|ENV|ADD|VOLUME|USER|WORKDIR)\s</string>
<key>captures</key>
<dict>
<key>0</key>
<dict>
<key>name</key> <key>name</key>
<string>keyword.control.dockerfile</string> <string>keyword.control.dockerfile</string>
</dict> </dict>
<key>1</key>
<dict>
<key>name</key>
<string>keyword.other.special-method.dockerfile</string>
</dict>
</dict>
</dict>
<dict> <dict>
<key>match</key> <key>match</key>
<string>^\s*(ENTRYPOINT|VOLUME|USER|WORKDIR)\s</string> <string>^\s*(ONBUILD\s+)?(CMD|ENTRYPOINT)\s</string>
<key>captures</key>
<dict>
<key>0</key>
<dict>
<key>name</key> <key>name</key>
<string>keyword.operator.dockerfile</string> <string>keyword.operator.dockerfile</string>
</dict> </dict>
<key>1</key>
<dict>
<key>name</key>
<string>keyword.other.special-method.dockerfile</string>
</dict>
</dict>
</dict>
<dict> <dict>
<key>begin</key> <key>begin</key>
<string>"</string> <string>"</string>
@ -39,6 +61,23 @@
</dict> </dict>
</array> </array>
</dict> </dict>
<dict>
<key>begin</key>
<string>'</string>
<key>end</key>
<string>'</string>
<key>name</key>
<string>string.quoted.single.dockerfile</string>
<key>patterns</key>
<array>
<dict>
<key>match</key>
<string>\\.</string>
<key>name</key>
<string>constant.character.escaped.dockerfile</string>
</dict>
</array>
</dict>
<dict> <dict>
<key>match</key> <key>match</key>
<string>^\s*#.*$</string> <string>^\s*#.*$</string>

View File

@ -0,0 +1,16 @@
<?xml version="1.0" encoding="UTF-8"?>
<!DOCTYPE plist PUBLIC "-//Apple Computer//DTD PLIST 1.0//EN" "http://www.apple.com/DTDs/PropertyList-1.0.dtd">
<plist version="1.0">
<dict>
<key>contactEmailRot13</key>
<string>germ@andz.com.ar</string>
<key>contactName</key>
<string>GermanDZ</string>
<key>description</key>
<string>Helpers for Docker.</string>
<key>name</key>
<string>Docker</string>
<key>uuid</key>
<string>8B9DDBAF-E65C-4E12-FFA7-467D4AA535B1</string>
</dict>
</plist>

View File

@ -1,23 +0,0 @@
# [PackageDev] target_format: plist, ext: tmLanguage
---
name: Dockerfile
scopeName: source.dockerfile
uuid: a39d8795-59d2-49af-aa00-fe74ee29576e
patterns:
# Keywords
- name: keyword.control.dockerfile
match: ^\s*(FROM|MAINTAINER|RUN|CMD|EXPOSE|ENV|ADD)\s
- name: keyword.operator.dockerfile
match: ^\s*(ENTRYPOINT|VOLUME|USER|WORKDIR)\s
# String
- name: string.quoted.double.dockerfile
begin: "\""
end: "\""
patterns:
- name: constant.character.escaped.dockerfile
match: \\.
# Comment
- name: comment.block.dockerfile
match: ^\s*#.*$
...

View File

@ -1,9 +1,16 @@
# Dockerfile.tmLanguage # Docker.tmbundle
Pretty basic Dockerfile.tmLanguage for Sublime Text syntax highlighting. Dockerfile syntax highlighting for TextMate and Sublime Text.
PR's with syntax updates, suggestions etc. are all very much appreciated! ## Install
I'll get to making this installable via Package Control soon! ### Sublime Text
Available for Sublime Text under [package control](https://sublime.wbond.net/packages/Dockerfile%20Syntax%20Highlighting).
Search for *Dockerfile Syntax Highlighting*
### TextMate 2
Copy the directory `Docker.tmbundle` (shown as a package on OS X) to `~/Library/Application Support/TextMate/Managed/Bundles`
enjoy. enjoy.

View File

@ -11,8 +11,7 @@ let b:current_syntax = "dockerfile"
syntax case ignore syntax case ignore
syntax match dockerfileKeyword /\v^\s*(FROM|MAINTAINER|RUN|CMD|EXPOSE|ENV|ADD)\s/ syntax match dockerfileKeyword /\v^\s*(ONBUILD\s+)?(ADD|CMD|ENTRYPOINT|ENV|EXPOSE|FROM|MAINTAINER|RUN|USER|VOLUME|WORKDIR)\s/
syntax match dockerfileKeyword /\v^\s*(ENTRYPOINT|VOLUME|USER|WORKDIR)\s/
highlight link dockerfileKeyword Keyword highlight link dockerfileKeyword Keyword
syntax region dockerfileString start=/\v"/ skip=/\v\\./ end=/\v"/ syntax region dockerfileString start=/\v"/ skip=/\v\\./ end=/\v"/

View File

@ -6,19 +6,16 @@ import (
"os" "os"
"strings" "strings"
"github.com/dotcloud/docker" _ "github.com/dotcloud/docker"
"github.com/dotcloud/docker/api" "github.com/dotcloud/docker/api"
"github.com/dotcloud/docker/dockerversion"
"github.com/dotcloud/docker/engine" "github.com/dotcloud/docker/engine"
flag "github.com/dotcloud/docker/pkg/mflag" flag "github.com/dotcloud/docker/pkg/mflag"
"github.com/dotcloud/docker/pkg/opts"
"github.com/dotcloud/docker/sysinit" "github.com/dotcloud/docker/sysinit"
"github.com/dotcloud/docker/utils" "github.com/dotcloud/docker/utils"
) )
var (
GITCOMMIT string
VERSION string
)
func main() { func main() {
if selfPath := utils.SelfPath(); selfPath == "/sbin/init" || selfPath == "/.dockerinit" { if selfPath := utils.SelfPath(); selfPath == "/sbin/init" || selfPath == "/.dockerinit" {
// Running in init mode // Running in init mode
@ -36,13 +33,13 @@ func main() {
pidfile = flag.String([]string{"p", "-pidfile"}, "/var/run/docker.pid", "Path to use for daemon PID file") pidfile = flag.String([]string{"p", "-pidfile"}, "/var/run/docker.pid", "Path to use for daemon PID file")
flRoot = flag.String([]string{"g", "-graph"}, "/var/lib/docker", "Path to use as the root of the docker runtime") flRoot = flag.String([]string{"g", "-graph"}, "/var/lib/docker", "Path to use as the root of the docker runtime")
flEnableCors = flag.Bool([]string{"#api-enable-cors", "-api-enable-cors"}, false, "Enable CORS headers in the remote API") flEnableCors = flag.Bool([]string{"#api-enable-cors", "-api-enable-cors"}, false, "Enable CORS headers in the remote API")
flDns = docker.NewListOpts(docker.ValidateIp4Address) flDns = opts.NewListOpts(opts.ValidateIp4Address)
flEnableIptables = flag.Bool([]string{"#iptables", "-iptables"}, true, "Disable docker's addition of iptables rules") flEnableIptables = flag.Bool([]string{"#iptables", "-iptables"}, true, "Disable docker's addition of iptables rules")
flEnableIpForward = flag.Bool([]string{"#ip-forward", "-ip-forward"}, true, "Disable enabling of net.ipv4.ip_forward") flEnableIpForward = flag.Bool([]string{"#ip-forward", "-ip-forward"}, true, "Disable enabling of net.ipv4.ip_forward")
flDefaultIp = flag.String([]string{"#ip", "-ip"}, "0.0.0.0", "Default IP address to use when binding container ports") flDefaultIp = flag.String([]string{"#ip", "-ip"}, "0.0.0.0", "Default IP address to use when binding container ports")
flInterContainerComm = flag.Bool([]string{"#icc", "-icc"}, true, "Enable inter-container communication") flInterContainerComm = flag.Bool([]string{"#icc", "-icc"}, true, "Enable inter-container communication")
flGraphDriver = flag.String([]string{"s", "-storage-driver"}, "", "Force the docker runtime to use a specific storage driver") flGraphDriver = flag.String([]string{"s", "-storage-driver"}, "", "Force the docker runtime to use a specific storage driver")
flHosts = docker.NewListOpts(docker.ValidateHost) flHosts = opts.NewListOpts(api.ValidateHost)
flMtu = flag.Int([]string{"#mtu", "-mtu"}, 0, "Set the containers network MTU; if no value is provided, default to the default route MTU, or 1500 if no default route is available") flMtu = flag.Int([]string{"#mtu", "-mtu"}, 0, "Set the containers network MTU; if no value is provided, default to the default route MTU, or 1500 if no default route is available")
) )
flag.Var(&flDns, []string{"#dns", "-dns"}, "Force docker to use specific DNS servers") flag.Var(&flDns, []string{"#dns", "-dns"}, "Force docker to use specific DNS servers")
@ -61,6 +58,9 @@ func main() {
// If we do not have a host, default to unix socket // If we do not have a host, default to unix socket
defaultHost = fmt.Sprintf("unix://%s", api.DEFAULTUNIXSOCKET) defaultHost = fmt.Sprintf("unix://%s", api.DEFAULTUNIXSOCKET)
} }
if _, err := api.ValidateHost(defaultHost); err != nil {
log.Fatal(err)
}
flHosts.Set(defaultHost) flHosts.Set(defaultHost)
} }
@ -71,8 +71,6 @@ func main() {
if *flDebug { if *flDebug {
os.Setenv("DEBUG", "1") os.Setenv("DEBUG", "1")
} }
docker.GITCOMMIT = GITCOMMIT
docker.VERSION = VERSION
if *flDaemon { if *flDaemon {
if flag.NArg() != 0 { if flag.NArg() != 0 {
flag.Usage() flag.Usage()
@ -83,6 +81,10 @@ func main() {
if err != nil { if err != nil {
log.Fatal(err) log.Fatal(err)
} }
// load the daemon in the background so we can immediately start
// the http api; that way connections don't fail while the daemon
// is booting
go func() {
// Load plugin: httpapi // Load plugin: httpapi
job := eng.Job("initserver") job := eng.Job("initserver")
job.Setenv("Pidfile", *pidfile) job.Setenv("Pidfile", *pidfile)
@ -100,11 +102,18 @@ func main() {
if err := job.Run(); err != nil { if err := job.Run(); err != nil {
log.Fatal(err) log.Fatal(err)
} }
// after the daemon is done setting up we can tell the api to start
// accepting connections
if err := eng.Job("acceptconnections").Run(); err != nil {
log.Fatal(err)
}
}()
// Serve api // Serve api
job = eng.Job("serveapi", flHosts.GetAll()...) job := eng.Job("serveapi", flHosts.GetAll()...)
job.SetenvBool("Logging", true) job.SetenvBool("Logging", true)
job.SetenvBool("EnableCors", *flEnableCors) job.SetenvBool("EnableCors", *flEnableCors)
job.Setenv("Version", VERSION) job.Setenv("Version", dockerversion.VERSION)
if err := job.Run(); err != nil { if err := job.Run(); err != nil {
log.Fatal(err) log.Fatal(err)
} }
@ -113,7 +122,7 @@ func main() {
log.Fatal("Please specify only one -H") log.Fatal("Please specify only one -H")
} }
protoAddrParts := strings.SplitN(flHosts.GetAll()[0], "://", 2) protoAddrParts := strings.SplitN(flHosts.GetAll()[0], "://", 2)
if err := docker.ParseCommands(protoAddrParts[0], protoAddrParts[1], flag.Args()...); err != nil { if err := api.ParseCommands(protoAddrParts[0], protoAddrParts[1], flag.Args()...); err != nil {
if sterr, ok := err.(*utils.StatusError); ok { if sterr, ok := err.(*utils.StatusError); ok {
if sterr.Status != "" { if sterr.Status != "" {
log.Println(sterr.Status) log.Println(sterr.Status)
@ -126,5 +135,5 @@ func main() {
} }
func showVersion() { func showVersion() {
fmt.Printf("Docker version %s, build %s\n", VERSION, GITCOMMIT) fmt.Printf("Docker version %s, build %s\n", dockerversion.VERSION, dockerversion.GITCOMMIT)
} }
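The startup change above is worth spelling out: `initserver` now runs in a goroutine, `serveapi` binds its sockets immediately, and `acceptconnections` only fires once initialization finishes, so early clients connect and wait instead of failing. A toy simulation of that ordering (standard library only, all names invented):

```
package main

import (
	"fmt"
	"net"
	"time"
)

func main() {
	ready := make(chan struct{})

	go func() {
		time.Sleep(500 * time.Millisecond) // "initserver": heavy daemon setup
		close(ready)                       // "acceptconnections"
	}()

	l, err := net.Listen("tcp", "127.0.0.1:0") // "serveapi": bind right away
	if err != nil {
		panic(err)
	}
	fmt.Println("listening on", l.Addr())

	for {
		conn, err := l.Accept()
		if err != nil {
			panic(err)
		}
		<-ready // queued connections wait for the daemon
		conn.Close()
	}
}
```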

View File

@ -4,11 +4,6 @@ import (
"github.com/dotcloud/docker/sysinit" "github.com/dotcloud/docker/sysinit"
) )
var (
GITCOMMIT string
VERSION string
)
func main() { func main() {
// Running in init mode // Running in init mode
sysinit.SysInit() sysinit.SysInit()

View File

@ -0,0 +1,15 @@
package dockerversion
// FIXME: this should be embedded in docker/docker.go,
// but we can't because distro policy requires us to
// package a separate dockerinit binary, and that binary needs
// to know its version too.
var (
GITCOMMIT string
VERSION string
IAMSTATIC bool // whether or not Docker itself was compiled statically via ./hack/make.sh binary
INITSHA1 string // sha1sum of separate static dockerinit, if Docker itself was compiled dynamically via ./hack/make.sh dynbinary
INITPATH string // custom location to search for a valid dockerinit binary (available for packagers as a last resort escape hatch)
)
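These variables stay empty unless the build injects them; Docker's build scripts do so with the linker's `-X` flag (the Go 1.2-era space-separated syntax is shown in the comment below; newer Go uses `name=value`). A hypothetical consumer:

```
package main

import (
	"fmt"

	"github.com/dotcloud/docker/dockerversion"
)

func main() {
	// e.g. built with:
	//   go build -ldflags "-X github.com/dotcloud/docker/dockerversion.VERSION 0.8.1" ./docker
	fmt.Println(dockerversion.VERSION, dockerversion.GITCOMMIT)
}
```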

View File

@ -24,7 +24,17 @@ a working, up-to-date docker installation, then continue to the next
step. step.
Step 2: Check out the Source Step 2: Install tools used for this tutorial
--------------------------------------------
Install ``git``; honest, it's very good. You can use other ways to get the Docker
source, but they're not anywhere near as easy.
Install ``make``. This tutorial uses our base Makefile to kick off the docker
containers in a repeatable and consistent way. Again, you can do it in other ways,
but that requires more work.
Step 3: Check out the Source
---------------------------- ----------------------------
.. code-block:: bash .. code-block:: bash
@ -35,7 +45,7 @@ Step 2: Check out the Source
To checkout a different revision just use ``git checkout`` with the name of branch or revision number. To checkout a different revision just use ``git checkout`` with the name of branch or revision number.
Step 3: Build the Environment Step 4: Build the Environment
----------------------------- -----------------------------
The following command will build a development environment using the Dockerfile in the current directory. Essentially, it will install all the build and runtime dependencies necessary to build and test Docker. This command will take some time to complete when you first execute it. The following command will build a development environment using the Dockerfile in the current directory. Essentially, it will install all the build and runtime dependencies necessary to build and test Docker. This command will take some time to complete when you first execute it.
@ -48,7 +58,7 @@ If the build is successful, congratulations! You have produced a clean build of
docker, neatly encapsulated in a standard build environment. docker, neatly encapsulated in a standard build environment.
Step 4: Build the Docker Binary Step 5: Build the Docker Binary
------------------------------- -------------------------------
To create the Docker binary, run this command: To create the Docker binary, run this command:

View File

@ -0,0 +1,53 @@
#
# example Dockerfile for http://docs.docker.io/en/latest/examples/postgresql_service/
#
FROM ubuntu
MAINTAINER SvenDowideit@docker.com
# Add the PostgreSQL PGP key to verify their Debian packages.
# It should be the same key as https://www.postgresql.org/media/keys/ACCC4CF8.asc
RUN apt-key adv --keyserver keyserver.ubuntu.com --recv-keys B97B0AFCAA1A47F044F244A07FCC7D46ACCC4CF8
# Add PostgreSQL's repository. It contains the most recent stable release
# of PostgreSQL, ``9.3``.
RUN echo "deb http://apt.postgresql.org/pub/repos/apt/ precise-pgdg main" > /etc/apt/sources.list.d/pgdg.list
# Update the Ubuntu and PostgreSQL repository indexes
RUN apt-get update
# Install ``python-software-properties``, ``software-properties-common`` and PostgreSQL 9.3
# There are some warnings (in red) that show up during the build. You can hide
# them by prefixing each apt-get statement with DEBIAN_FRONTEND=noninteractive
RUN apt-get -y -q install python-software-properties software-properties-common
RUN apt-get -y -q install postgresql-9.3 postgresql-client-9.3 postgresql-contrib-9.3
# Note: The official Debian and Ubuntu images automatically ``apt-get clean``
# after each ``apt-get``
# Run the rest of the commands as the ``postgres`` user created by the ``postgres-9.3`` package when it was ``apt-get installed``
USER postgres
# Create a PostgreSQL role named ``docker`` with ``docker`` as the password and
# then create a database `docker` owned by the ``docker`` role.
# Note: here we use ``&&\`` to run commands one after the other - the ``\``
# allows the RUN command to span multiple lines.
RUN /etc/init.d/postgresql start &&\
psql --command "CREATE USER docker WITH SUPERUSER PASSWORD 'docker';" &&\
createdb -O docker docker
# Adjust PostgreSQL configuration so that remote connections to the
# database are possible.
RUN echo "host all all 0.0.0.0/0 md5" >> /etc/postgresql/9.3/main/pg_hba.conf
# And add ``listen_addresses`` to ``/etc/postgresql/9.3/main/postgresql.conf``
RUN echo "listen_addresses='*'" >> /etc/postgresql/9.3/main/postgresql.conf
# Expose the PostgreSQL port
EXPOSE 5432
# Add VOLUMEs to allow backup of config, logs and databases
VOLUME ["/etc/postgresql", "/var/log/postgresql", "/var/lib/postgresql"]
# Set the default command to run when starting the container
CMD ["/usr/lib/postgresql/9.3/bin/postgres", "-D", "/var/lib/postgresql/9.3/main", "-c", "config_file=/etc/postgresql/9.3/main/postgresql.conf"]

View File

@ -9,152 +9,109 @@ PostgreSQL Service
.. include:: example_header.inc .. include:: example_header.inc
.. note::
A shorter version of `this blog post`_.
.. _this blog post: http://zaiste.net/2013/08/docker_postgresql_how_to/
Installing PostgreSQL on Docker Installing PostgreSQL on Docker
------------------------------- -------------------------------
Run an interactive shell in a Docker container. Assuming there is no Docker image that suits your needs in `the index`_, you
can create one yourself.
.. code-block:: bash .. _the index: http://index.docker.io
sudo docker run -i -t ubuntu /bin/bash Start by creating a new Dockerfile:
Update its dependencies.
.. code-block:: bash
apt-get update
Install ``python-software-properties``, ``software-properties-common``, ``wget`` and ``vim``.
.. code-block:: bash
apt-get -y install python-software-properties software-properties-common wget vim
Add PostgreSQL's repository. It contains the most recent stable release
of PostgreSQL, ``9.3``.
.. code-block:: bash
wget --quiet -O - https://www.postgresql.org/media/keys/ACCC4CF8.asc | apt-key add -
echo "deb http://apt.postgresql.org/pub/repos/apt/ precise-pgdg main" > /etc/apt/sources.list.d/pgdg.list
apt-get update
Finally, install PostgreSQL 9.3
.. code-block:: bash
apt-get -y install postgresql-9.3 postgresql-client-9.3 postgresql-contrib-9.3
Now, create a PostgreSQL superuser role that can create databases and
other roles. Following Vagrant's convention the role will be named
``docker`` with ``docker`` password assigned to it.
.. code-block:: bash
su postgres -c "createuser -P -d -r -s docker"
Create a test database also named ``docker`` owned by previously created ``docker``
role.
.. code-block:: bash
su postgres -c "createdb -O docker docker"
Adjust PostgreSQL configuration so that remote connections to the
database are possible. Make sure that inside
``/etc/postgresql/9.3/main/pg_hba.conf`` you have the following line:
.. code-block:: bash
host all all 0.0.0.0/0 md5
Additionally, inside ``/etc/postgresql/9.3/main/postgresql.conf``
uncomment ``listen_addresses`` like so:
.. code-block:: bash
listen_addresses='*'
.. note:: .. note::
This PostgreSQL setup is for development only purposes. Refer This PostgreSQL setup is for development only purposes. Refer
to PostgreSQL documentation how to fine-tune these settings so that it to the PostgreSQL documentation to fine-tune these settings so that it
is secure enough. is suitably secure.
Exit. .. literalinclude:: postgresql_service.Dockerfile
Build an image from the Dockerfile and assign it a name.
.. code-block:: bash .. code-block:: bash
exit $ sudo docker build -t eg_postgresql .
Create an image from our container and assign it a name. The ``<container_id>`` And run the PostgreSQL server container (in the foreground):
is in the Bash prompt; you can also locate it using ``docker ps -a``.
.. code-block:: bash .. code-block:: bash
sudo docker commit <container_id> <your username>/postgresql $ sudo docker run -rm -P -name pg_test eg_postgresql
Finally, run the PostgreSQL server via ``docker``. There are 2 ways to connect to the PostgreSQL server. We can use
:ref:`working_with_links_names`, or we can access it from our host (or the network).
.. note:: The ``-rm`` flag removes the container when the container
exits successfully.
Using container linking
^^^^^^^^^^^^^^^^^^^^^^^
Containers can be linked to another container's ports directly using
``-link remote_name:local_alias`` in the client's ``docker run``. This will
set a number of environment variables that can then be used to connect:
.. code-block:: bash .. code-block:: bash
CONTAINER=$(sudo docker run -d -p 5432 \ $ sudo docker run -rm -t -i -link pg_test:pg eg_postgresql bash
-t <your username>/postgresql \
/bin/su postgres -c '/usr/lib/postgresql/9.3/bin/postgres \
-D /var/lib/postgresql/9.3/main \
-c config_file=/etc/postgresql/9.3/main/postgresql.conf')
Connect to the PostgreSQL server using ``psql`` (you will need the postgresql client installed on the machine; for Ubuntu, use something like ``sudo apt-get install postgresql-client``).
postgres@7ef98b1b7243:/$ psql -h $PG_PORT_5432_TCP_ADDR -p $PG_PORT_5432_TCP_PORT -d docker -U docker --password
Connecting from your host system
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
Assuming you have the postgresql-client installed, you can use the host-mapped port
to test as well. You need to use ``docker ps`` to find out which local host port the
container is mapped to first:
.. code-block:: bash .. code-block:: bash
CONTAINER_IP=$(sudo docker inspect -format='{{.NetworkSettings.IPAddress}}' $CONTAINER)
psql -h $CONTAINER_IP -p 5432 -d docker -U docker -W
$ docker ps
CONTAINER ID        IMAGE                  COMMAND                CREATED             STATUS              PORTS                     NAMES
5e24362f27f6        eg_postgresql:latest   /usr/lib/postgresql/   About an hour ago   Up About an hour    0.0.0.0:49153->5432/tcp   pg_test
$ psql -h localhost -p 49153 -d docker -U docker --password
As before, create roles or databases if needed. Testing the database
^^^^^^^^^^^^^^^^^^^^
Once you have authenticated and have a ``docker=#`` prompt, you can
create a table and populate it.
.. code-block:: bash .. code-block:: bash
psql (9.3.1) psql (9.3.1)
Type "help" for help. Type "help" for help.
docker=# CREATE DATABASE foo OWNER=docker; docker=# CREATE TABLE cities (
CREATE DATABASE docker(# name varchar(80),
docker(# location point
docker(# );
CREATE TABLE
docker=# INSERT INTO cities VALUES ('San Francisco', '(-194.0, 53.0)');
INSERT 0 1
docker=# select * from cities;
name | location
---------------+-----------
San Francisco | (-194,53)
(1 row)
Additionally, publish your newly created image on the Docker Index. Using the container volumes
^^^^^^^^^^^^^^^^^^^^^^^^^^^
You can use the defined volumes to inspect the PostgreSQL log files and to backup your
configuration and data:
.. code-block:: bash .. code-block:: bash
sudo docker login
Username: <your username>
[...]
.. code-block:: bash
sudo docker push <your username>/postgresql
docker run -rm --volumes-from pg_test -t -i busybox sh
/ # ls
bin etc lib linuxrc mnt proc run sys usr
dev home lib64 media opt root sbin tmp var
/ # ls /etc/postgresql/9.3/main/
environment pg_hba.conf postgresql.conf
pg_ctl.conf pg_ident.conf start.conf
/tmp # ls /var/log
ldconfig postgresql
PostgreSQL service auto-launch
------------------------------
Running our image seems complicated. We have to specify the whole command with
``docker run``. Let's simplify it so the service starts automatically when the
container starts.
.. code-block:: bash
sudo docker commit -run='{"Cmd": \
["/bin/su", "postgres", "-c", "/usr/lib/postgresql/9.3/bin/postgres -D \
/var/lib/postgresql/9.3/main -c \
config_file=/etc/postgresql/9.3/main/postgresql.conf"], "PortSpecs": ["5432"]}' \
<container_id> <your username>/postgresql
From now on, just type ``docker run <your username>/postgresql`` and
PostgreSQL should automatically start.

View File

@ -112,7 +112,7 @@ Once we've got a built image we can launch a container from it.
.. code-block:: bash .. code-block:: bash
sudo docker run -p 22 -p 80 -t -i <yourname>/supervisor sudo docker run -p 22 -p 80 -t -i <yourname>/supervisord
2013-11-25 18:53:22,312 CRIT Supervisor running as root (no user in config file) 2013-11-25 18:53:22,312 CRIT Supervisor running as root (no user in config file)
2013-11-25 18:53:22,312 WARN Included extra file "/etc/supervisor/conf.d/supervisord.conf" during parsing 2013-11-25 18:53:22,312 WARN Included extra file "/etc/supervisor/conf.d/supervisord.conf" during parsing
2013-11-25 18:53:22,342 INFO supervisord started with pid 1 2013-11-25 18:53:22,342 INFO supervisord started with pid 1

View File

@ -175,6 +175,7 @@ Linux:
- Gentoo - Gentoo
- ArchLinux - ArchLinux
- openSUSE 12.3+ - openSUSE 12.3+
- CRUX 3.0+
Cloud: Cloud:
@ -182,6 +183,12 @@ Cloud:
- Google Compute Engine - Google Compute Engine
- Rackspace - Rackspace
How do I report a security issue with Docker?
.............................................
You can learn about the project's security policy `here <http://www.docker.io/security/>`_
and report security issues to this `mailbox <mailto:security@docker.com>`_.
Can I help by adding some questions and answers? Can I help by adding some questions and answers?
................................................ ................................................

View File

@ -1,5 +1,5 @@
:title: Installation on Amazon EC2 :title: Installation on Amazon EC2
:description: Docker installation on Amazon EC2 :description: Please note this project is currently under heavy development. It should not be used in production.
:keywords: amazon ec2, virtualization, cloud, docker, documentation, installation :keywords: amazon ec2, virtualization, cloud, docker, documentation, installation
Amazon EC2 Amazon EC2

View File

@ -1,5 +1,5 @@
:title: Installation on Arch Linux :title: Installation on Arch Linux
:description: Docker installation on Arch Linux. :description: Please note this project is currently under heavy development. It should not be used in production.
:keywords: arch linux, virtualization, docker, documentation, installation :keywords: arch linux, virtualization, docker, documentation, installation
.. _arch_linux: .. _arch_linux:

View File

@ -0,0 +1,98 @@
:title: Installation on CRUX Linux
:description: Docker installation on CRUX Linux.
:keywords: crux linux, virtualization, Docker, documentation, installation
.. _crux_linux:
CRUX Linux
==========
.. include:: install_header.inc
.. include:: install_unofficial.inc
Installing on CRUX Linux can be handled via the ports from `James Mills <http://prologic.shortcircuit.net.au/>`_:
* `docker <https://bitbucket.org/prologic/ports/src/tip/docker/>`_
* `docker-bin <https://bitbucket.org/prologic/ports/src/tip/docker-bin/>`_
* `docker-git <https://bitbucket.org/prologic/ports/src/tip/docker-git/>`_
The ``docker`` port will install the latest tagged version of Docker.
The ``docker-bin`` port will install the latest tagged version of Docker from upstream built binaries.
The ``docker-git`` port will build from the current master branch.
Installation
------------
For the time being (*until the CRUX Docker port(s) get into the official contrib repository*) you will need to install
`James Mills' <https://bitbucket.org/prologic/ports>`_ ports repository. You can do so via:
Download the ``httpup`` file to ``/etc/ports/``:
::
curl -q -o - 'http://crux.nu/portdb/?a=getup&q=prologic' > /etc/ports/prologic.httpup
Add ``prtdir /usr/ports/prologic`` to ``/etc/prt-get.conf``:
::
vim /etc/prt-get.conf
# or:
echo "prtdir /usr/ports/prologic" >> /etc/prt-get.conf
Update ports and prt-get cache:
::
ports -u
prt-get cache
To install Docker (*and its dependencies*):
::
prt-get depinst docker
Use ``docker-bin`` for the upstream binary or ``docker-git`` to build and install from the master branch from git.
Kernel Requirements
-------------------
To have a working **CRUX+Docker** host you must ensure your kernel
has the necessary modules enabled for LXC containers to function
correctly and for the Docker daemon to work properly.
Please read the ``README.rst``:
::
prt-get readme docker
There is a ``test_kernel_config.sh`` script in the above ports which you can use to test your Kernel configuration:
::
cd /usr/ports/prologic/docker
./test_kernel_config.sh /usr/src/linux/.config
Starting Docker
---------------
An rc script is provided for Docker. To start the Docker service:
::
sudo su -
/etc/rc.d/docker start
To start on system boot:
- Edit ``/etc/rc.conf``
- Put ``docker`` into the ``SERVICES=(...)`` array after ``net``, as sketched below.
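A sketch of the resulting ``/etc/rc.conf`` line (the other services shown are illustrative):
::
# /etc/rc.conf
SERVICES=(net crond docker)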

View File

@ -1,4 +1,4 @@
:title: Requirements and Installation on Fedora :title: Installation on Fedora
:description: Please note this project is currently under heavy development. It should not be used in production. :description: Please note this project is currently under heavy development. It should not be used in production.
:keywords: Docker, Docker documentation, Fedora, requirements, virtualbox, vagrant, git, ssh, putty, cygwin, linux :keywords: Docker, Docker documentation, Fedora, requirements, virtualbox, vagrant, git, ssh, putty, cygwin, linux

View File

@ -1,5 +1,5 @@
:title: Installation on FrugalWare :title: Installation on FrugalWare
:description: Docker installation on FrugalWare. :description: Please note this project is currently under heavy development. It should not be used in production.
:keywords: frugalware linux, virtualization, docker, documentation, installation :keywords: frugalware linux, virtualization, docker, documentation, installation
.. _frugalware: .. _frugalware:

View File

@ -1,5 +1,5 @@
:title: Installation on Gentoo Linux :title: Installation on Gentoo
:description: Docker installation instructions and nuances for Gentoo Linux. :description: Please note this project is currently under heavy development. It should not be used in production.
:keywords: gentoo linux, virtualization, docker, documentation, installation :keywords: gentoo linux, virtualization, docker, documentation, installation
.. _gentoo_linux: .. _gentoo_linux:

View File

@ -50,18 +50,9 @@
docker-playground:~$ curl get.docker.io | bash docker-playground:~$ curl get.docker.io | bash
docker-playground:~$ sudo update-rc.d docker defaults docker-playground:~$ sudo update-rc.d docker defaults
6. If running in zones: ``us-central1-a``, ``europe-west1-1``, and ``europe-west1-b``, the docker daemon must be started with the ``-mtu`` flag. Without the flag, you may experience intermittent network pauses. 6. Start a new container:
`See this issue <https://code.google.com/p/google-compute-engine/issues/detail?id=57>`_ for more details.
.. code-block:: bash
docker-playground:~$ echo 'DOCKER_OPTS="$DOCKER_OPTS -mtu 1460"' | sudo tee -a /etc/default/docker
docker-playground:~$ sudo service docker restart
7. Start a new container:
.. code-block:: bash .. code-block:: bash
docker-playground:~$ sudo docker run busybox echo 'docker on GCE \o/' docker-playground:~$ sudo docker run busybox echo 'docker on GCE \o/'
docker on GCE \o/ docker on GCE \o/

View File

@ -21,6 +21,7 @@ Contents:
rhel rhel
fedora fedora
archlinux archlinux
cruxlinux
gentoolinux gentoolinux
openSUSE openSUSE
frugalware frugalware

View File

@ -1,4 +1,4 @@
:title: Requirements and Installation on Mac OS X 10.6 Snow Leopard :title: Installation on Mac OS X 10.6 Snow Leopard
:description: Please note this project is currently under heavy development. It should not be used in production. :description: Please note this project is currently under heavy development. It should not be used in production.
:keywords: Docker, Docker documentation, requirements, virtualbox, ssh, linux, os x, osx, mac :keywords: Docker, Docker documentation, requirements, virtualbox, ssh, linux, os x, osx, mac
@ -66,13 +66,13 @@ Run the following commands to get it downloaded and set up:
.. code-block:: bash .. code-block:: bash
# Get the file # Get the file
curl -o docker http://get.docker.io/builds/Darwin/x86_64/docker-latest curl -o docker https://get.docker.io/builds/Darwin/x86_64/docker-latest
# Mark it executable # Mark it executable
chmod +x docker chmod +x docker
# Set the environment variable for the docker daemon # Set the environment variable for the docker daemon
export DOCKER_HOST=tcp:// export DOCKER_HOST=tcp://127.0.0.1:4243
# Copy the executable file # Copy the executable file
sudo cp docker /usr/local/bin/ sudo cp docker /usr/local/bin/
@ -116,6 +116,21 @@ client just like any other application.
# Git commit (server): c348c04 # Git commit (server): c348c04
# Go version (server): go1.2 # Go version (server): go1.2
Forwarding VM Port Range to Host
--------------------------------
If we take the port range that docker uses by default with the -P option
(49000-49900), and forward the same range from the host to the VM, we'll be able to interact
with our containers as if they were running locally:
.. code-block:: bash
# vm must be powered off
for i in {49000..49900}; do
VBoxManage modifyvm "boot2docker-vm" --natpf1 "tcp-port$i,tcp,,$i,,$i";
VBoxManage modifyvm "boot2docker-vm" --natpf1 "udp-port$i,udp,,$i,,$i";
done
SSH-ing The VM SSH-ing The VM
-------------- --------------
@ -147,6 +162,18 @@ If SSH complains about keys:
ssh-keygen -R '[localhost]:2022' ssh-keygen -R '[localhost]:2022'
Upgrading to a newer release of boot2docker
-------------------------------------------
To upgrade an initialised VM, you can use the following 3 commands. Your persistence
disk will not be changed, so you won't lose your images and containers:
.. code-block:: bash
./boot2docker stop
./boot2docker download
./boot2docker start
About the way Docker works on Mac OS X: About the way Docker works on Mac OS X:
--------------------------------------- ---------------------------------------

View File

@ -1,5 +1,5 @@
:title: Installation on openSUSE :title: Installation on openSUSE
:description: Docker installation on openSUSE. :description: Please note this project is currently under heavy development. It should not be used in production.
:keywords: openSUSE, virtualbox, docker, documentation, installation :keywords: openSUSE, virtualbox, docker, documentation, installation
.. _openSUSE: .. _openSUSE:

View File

@ -1,5 +1,5 @@
:title: Rackspace Cloud Installation :title: Installation on Rackspace Cloud
:description: Installing Docker on Ubuntu provided by Rackspace :description: Please note this project is currently under heavy development. It should not be used in production.
:keywords: Rackspace Cloud, installation, docker, linux, ubuntu :keywords: Rackspace Cloud, installation, docker, linux, ubuntu
Rackspace Cloud Rackspace Cloud

View File

@ -1,4 +1,4 @@
:title: Requirements and Installation on Red Hat Enterprise Linux :title: Installation on Red Hat Enterprise Linux
:description: Please note this project is currently under heavy development. It should not be used in production. :description: Please note this project is currently under heavy development. It should not be used in production.
:keywords: Docker, Docker documentation, requirements, linux, rhel, centos :keywords: Docker, Docker documentation, requirements, linux, rhel, centos

View File

@ -1,4 +1,4 @@
:title: Requirements and Installation on Ubuntu Linux :title: Installation on Ubuntu
:description: Please note this project is currently under heavy development. It should not be used in production. :description: Please note this project is currently under heavy development. It should not be used in production.
:keywords: Docker, Docker documentation, requirements, virtualbox, vagrant, git, ssh, putty, cygwin, linux :keywords: Docker, Docker documentation, requirements, virtualbox, vagrant, git, ssh, putty, cygwin, linux

View File

@ -1,11 +1,11 @@
:title: Requirements and Installation on Windows :title: Installation on Windows
:description: Docker's tutorial to run docker on Windows :description: Please note this project is currently under heavy development. It should not be used in production.
:keywords: Docker, Docker documentation, Windows, requirements, virtualbox, vagrant, git, ssh, putty, cygwin :keywords: Docker, Docker documentation, Windows, requirements, virtualbox, vagrant, git, ssh, putty, cygwin
.. _windows: .. _windows:
Installing Docker on Windows Windows
============================ =======
Docker can run on Windows using a VM like VirtualBox. You then run Docker can run on Windows using a VM like VirtualBox. You then run
Linux within the VM. Linux within the VM.

View File

@ -732,11 +732,11 @@ Tag an image into a repository
.. sourcecode:: http .. sourcecode:: http
HTTP/1.1 200 OK HTTP/1.1 201 OK
:query repo: The repository to tag in :query repo: The repository to tag in
:query force: 1/True/true or 0/False/false, default false :query force: 1/True/true or 0/False/false, default false
:statuscode 200: no error :statuscode 201: no error
:statuscode 400: bad parameter :statuscode 400: bad parameter
:statuscode 404: no such image :statuscode 404: no such image
:statuscode 500: server error :statuscode 500: server error
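As a usage sketch (daemon address, image and repository names are illustrative), a client hitting this endpoint should now see the new status code:
.. code-block:: bash
# tag image "ubuntu" into repository "myrepo";
# expect a 201 status line on success
curl -i -X POST "http://127.0.0.1:4243/images/ubuntu/tag?repo=myrepo&force=0"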

View File

@ -742,11 +742,11 @@ Tag an image into a repository
.. sourcecode:: http .. sourcecode:: http
HTTP/1.1 200 OK HTTP/1.1 201 OK
:query repo: The repository to tag in :query repo: The repository to tag in
:query force: 1/True/true or 0/False/false, default false :query force: 1/True/true or 0/False/false, default false
:statuscode 200: no error :statuscode 201: no error
:statuscode 400: bad parameter :statuscode 400: bad parameter
:statuscode 404: no such image :statuscode 404: no such image
:statuscode 409: conflict :statuscode 409: conflict

View File

@ -761,11 +761,11 @@ Tag an image into a repository
.. sourcecode:: http .. sourcecode:: http
HTTP/1.1 200 OK HTTP/1.1 201 OK
:query repo: The repository to tag in :query repo: The repository to tag in
:query force: 1/True/true or 0/False/false, default false :query force: 1/True/true or 0/False/false, default false
:statuscode 200: no error :statuscode 201: no error
:statuscode 400: bad parameter :statuscode 400: bad parameter
:statuscode 404: no such image :statuscode 404: no such image
:statuscode 409: conflict :statuscode 409: conflict

View File

@ -808,11 +808,11 @@ Tag an image into a repository
.. sourcecode:: http .. sourcecode:: http
HTTP/1.1 200 OK HTTP/1.1 201 OK
:query repo: The repository to tag in :query repo: The repository to tag in
:query force: 1/True/true or 0/False/false, default false :query force: 1/True/true or 0/False/false, default false
:statuscode 200: no error :statuscode 201: no error
:statuscode 400: bad parameter :statuscode 400: bad parameter
:statuscode 404: no such image :statuscode 404: no such image
:statuscode 409: conflict :statuscode 409: conflict

View File

@ -852,11 +852,11 @@ Tag an image into a repository
.. sourcecode:: http .. sourcecode:: http
HTTP/1.1 200 OK HTTP/1.1 201 OK
:query repo: The repository to tag in :query repo: The repository to tag in
:query force: 1/True/true or 0/False/false, default false :query force: 1/True/true or 0/False/false, default false
:statuscode 200: no error :statuscode 201: no error
:statuscode 400: bad parameter :statuscode 400: bad parameter
:statuscode 404: no such image :statuscode 404: no such image
:statuscode 409: conflict :statuscode 409: conflict

View File

@ -831,11 +831,11 @@ Tag an image into a repository
.. sourcecode:: http .. sourcecode:: http
HTTP/1.1 200 OK HTTP/1.1 201 OK
:query repo: The repository to tag in :query repo: The repository to tag in
:query force: 1/True/true or 0/False/false, default false :query force: 1/True/true or 0/False/false, default false
:statuscode 200: no error :statuscode 201: no error
:statuscode 400: bad parameter :statuscode 400: bad parameter
:statuscode 404: no such image :statuscode 404: no such image
:statuscode 409: conflict :statuscode 409: conflict

View File

@ -958,11 +958,11 @@ Tag an image into a repository
.. sourcecode:: http .. sourcecode:: http
HTTP/1.1 200 OK HTTP/1.1 201 OK
:query repo: The repository to tag in :query repo: The repository to tag in
:query force: 1/True/true or 0/False/false, default false :query force: 1/True/true or 0/False/false, default false
:statuscode 200: no error :statuscode 201: no error
:statuscode 400: bad parameter :statuscode 400: bad parameter
:statuscode 404: no such image :statuscode 404: no such image
:statuscode 409: conflict :statuscode 409: conflict

View File

@ -877,11 +877,11 @@ Tag an image into a repository
.. sourcecode:: http .. sourcecode:: http
HTTP/1.1 200 OK HTTP/1.1 201 OK
:query repo: The repository to tag in :query repo: The repository to tag in
:query force: 1/True/true or 0/False/false, default false :query force: 1/True/true or 0/False/false, default false
:statuscode 200: no error :statuscode 201: no error
:statuscode 400: bad parameter :statuscode 400: bad parameter
:statuscode 404: no such image :statuscode 404: no such image
:statuscode 409: conflict :statuscode 409: conflict

View File

@ -892,11 +892,11 @@ Tag an image into a repository
.. sourcecode:: http .. sourcecode:: http
HTTP/1.1 200 OK HTTP/1.1 201 OK
:query repo: The repository to tag in :query repo: The repository to tag in
:query force: 1/True/true or 0/False/false, default false :query force: 1/True/true or 0/False/false, default false
:statuscode 200: no error :statuscode 201: no error
:statuscode 400: bad parameter :statuscode 400: bad parameter
:statuscode 404: no such image :statuscode 404: no such image
:statuscode 409: conflict :statuscode 409: conflict

View File

@ -892,11 +892,11 @@ Tag an image into a repository
.. sourcecode:: http .. sourcecode:: http
HTTP/1.1 200 OK HTTP/1.1 201 OK
:query repo: The repository to tag in :query repo: The repository to tag in
:query force: 1/True/true or 0/False/false, default false :query force: 1/True/true or 0/False/false, default false
:statuscode 200: no error :statuscode 201: no error
:statuscode 400: bad parameter :statuscode 400: bad parameter
:statuscode 404: no such image :statuscode 404: no such image
:statuscode 409: conflict :statuscode 409: conflict

View File

@ -1,6 +1,6 @@
:title: Remote API Client Libraries :title: Remote API Client Libraries
:description: Various client libraries available to use with the Docker remote API :description: Various client libraries available to use with the Docker remote API
:keywords: API, Docker, index, registry, REST, documentation, clients, Python, Ruby, Javascript, Erlang, Go :keywords: API, Docker, index, registry, REST, documentation, clients, Python, Ruby, JavaScript, Erlang, Go
================================== ==================================
@ -21,12 +21,18 @@ and we will add the libraries here.
+----------------------+----------------+--------------------------------------------+----------+ +----------------------+----------------+--------------------------------------------+----------+
| Ruby | docker-api | https://github.com/swipely/docker-api | Active | | Ruby | docker-api | https://github.com/swipely/docker-api | Active |
+----------------------+----------------+--------------------------------------------+----------+ +----------------------+----------------+--------------------------------------------+----------+
| Javascript (NodeJS) | docker.io | https://github.com/appersonlabs/docker.io | Active | | JavaScript (NodeJS) | dockerode | https://github.com/apocas/dockerode | Active |
| | | Install via NPM: `npm install dockerode` | |
+----------------------+----------------+--------------------------------------------+----------+
| JavaScript (NodeJS) | docker.io | https://github.com/appersonlabs/docker.io | Active |
| | | Install via NPM: `npm install docker.io` | | | | | Install via NPM: `npm install docker.io` | |
+----------------------+----------------+--------------------------------------------+----------+ +----------------------+----------------+--------------------------------------------+----------+
| Javascript | docker-js | https://github.com/dgoujard/docker-js | Active | | JavaScript | docker-js | https://github.com/dgoujard/docker-js | Active |
+----------------------+----------------+--------------------------------------------+----------+ +----------------------+----------------+--------------------------------------------+----------+
| Javascript (Angular) | dockerui | https://github.com/crosbymichael/dockerui | Active | | JavaScript (Angular) | docker-cp | https://github.com/13W/docker-cp | Active |
| **WebUI** | | | |
+----------------------+----------------+--------------------------------------------+----------+
| JavaScript (Angular) | dockerui | https://github.com/crosbymichael/dockerui | Active |
| **WebUI** | | | | | **WebUI** | | | |
+----------------------+----------------+--------------------------------------------+----------+ +----------------------+----------------+--------------------------------------------+----------+
| Java | docker-java | https://github.com/kpelykh/docker-java | Active | | Java | docker-java | https://github.com/kpelykh/docker-java | Active |

View File

@ -251,9 +251,14 @@ value ``<value>``. This value will be passed to all future ``RUN``
instructions. This is functionally equivalent to prefixing the command instructions. This is functionally equivalent to prefixing the command
with ``<key>=<value>`` with ``<key>=<value>``
The environment variables set using ``ENV`` will persist when a container is run
from the resulting image. You can view the values using ``docker inspect``, and change them using ``docker run --env <key>=<value>``.
.. note:: .. note::
The environment variables will persist when a container is run One example where this can cause unexpected consequences is setting
from the resulting image. ``ENV DEBIAN_FRONTEND noninteractive``,
which will persist when the container is run interactively, for example:
``docker run -t -i image bash``
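A short sketch of viewing and overriding a persisted variable (image name and inspect output layout are illustrative):
.. code-block:: bash
# view the environment baked into an image
sudo docker inspect image | grep -A 3 '"Env"'
# override a persisted variable for a single run
sudo docker run --env DEBIAN_FRONTEND=dialog -t -i image bash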
.. _dockerfile_add: .. _dockerfile_add:
@ -269,7 +274,7 @@ the container's filesystem at path ``<dest>``.
source directory being built (also called the *context* of the build) or source directory being built (also called the *context* of the build) or
a remote file URL. a remote file URL.
``<dest>`` is the path at which the source will be copied in the ``<dest>`` is the absolute path to which the source will be copied inside the
destination container. destination container.
All new files and directories are created with mode 0755, uid and gid All new files and directories are created with mode 0755, uid and gid
@ -399,8 +404,10 @@ the image.
``WORKDIR /path/to/workdir`` ``WORKDIR /path/to/workdir``
The ``WORKDIR`` instruction sets the working directory in which The ``WORKDIR`` instruction sets the working directory for the ``RUN``, ``CMD`` and
the command given by ``CMD`` is executed. ``ENTRYPOINT`` Dockerfile commands that follow it.
It can be used multiple times in a single Dockerfile.
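A minimal sketch of stacked ``WORKDIR`` instructions (image name illustrative):
.. code-block:: bash
cat > Dockerfile <<'EOF'
FROM ubuntu
WORKDIR /tmp
# this RUN executes in /tmp
RUN pwd
WORKDIR /var/log
# later RUN, CMD and ENTRYPOINT instructions now use /var/log
RUN pwd
CMD ["pwd"]
EOF
sudo docker build -t eg_workdir .
sudo docker run eg_workdir   # prints /var/log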
3.11 ONBUILD 3.11 ONBUILD
------------ ------------

View File

@ -12,7 +12,7 @@ To list available commands, either run ``docker`` with no parameters or execute
$ sudo docker $ sudo docker
Usage: docker [OPTIONS] COMMAND [arg...] Usage: docker [OPTIONS] COMMAND [arg...]
-H=[unix:///var/run/docker.sock]: tcp://[host[:port]] to bind/connect to or unix://[/path/to/socket] to use. When host=[0.0.0.0], port=[4243] or path=[/var/run/docker.sock] is omitted, default values are used. -H=[unix:///var/run/docker.sock]: tcp://[host]:port to bind/connect to or unix://[/path/to/socket] to use. When host=[127.0.0.1] is omitted for tcp or path=[/var/run/docker.sock] is omitted for unix sockets, default values are used.
A self-sufficient runtime for linux containers. A self-sufficient runtime for linux containers.
@ -102,12 +102,17 @@ the ``-H`` flag for the client.
docker ps docker ps
# both are equal # both are equal
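As a sketch, the daemon can be given several ``-H`` bindings at once (addresses illustrative), and each client invocation picks one endpoint:
.. code-block:: bash
# daemon: listen on the default unix socket and on a local TCP port
sudo docker -d -H unix:///var/run/docker.sock -H tcp://127.0.0.1:4243
# client: select the TCP endpoint explicitly
docker -H tcp://127.0.0.1:4243 ps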
To run the daemon with `systemd socket activation <http://0pointer.de/blog/projects/socket-activation.html>`_, use ``docker -d -H fd://``. To run the daemon with `systemd socket activation <http://0pointer.de/blog/projects/socket-activation.html>`_, use ``docker -d -H fd://``.
Using ``fd://`` will work perfectly for most setups but you can also specify individual sockets: ``docker -d -H fd://3``. Using ``fd://`` will work perfectly for most setups but you can also specify individual sockets: ``docker -d -H fd://3``.
If the specified socket activated files aren't found then docker will exit. If the specified socket activated files aren't found then docker will exit.
You can find examples of using systemd socket activation with docker and systemd in the `docker source tree <https://github.com/dotcloud/docker/blob/master/contrib/init/systemd/socket-activation/>`_. You can find examples of using systemd socket activation with docker and systemd in the `docker source tree <https://github.com/dotcloud/docker/blob/master/contrib/init/systemd/socket-activation/>`_.
.. warning::
Docker and LXC do not support the use of softlinks for either the Docker data directory (``/var/lib/docker``) or for ``/tmp``.
If your system is likely to be set up in that way, you can use ``readlink -f`` to canonicalise the links:
``TMPDIR=$(readlink -f /tmp) /usr/local/bin/docker -d -D -g $(readlink -f /var/lib/docker) -H unix:// $EXPOSE_ALL > /var/lib/boot2docker/docker.log 2>&1``
.. _cli_attach: .. _cli_attach:
``attach`` ``attach``
@ -181,7 +186,7 @@ Examples:
Build a new container image from the source code at PATH Build a new container image from the source code at PATH
-t, --time="": Repository name (and optionally a tag) to be applied -t, --time="": Repository name (and optionally a tag) to be applied
to the resulting image in case of success. to the resulting image in case of success.
-q, --quiet=false: Suppress verbose build output. -q, --quiet=false: Suppress the verbose output generated by the containers.
--no-cache: Do not use the cache when building the image. --no-cache: Do not use the cache when building the image.
--rm: Remove intermediate containers after a successful build --rm: Remove intermediate containers after a successful build
@ -189,7 +194,8 @@ The files at ``PATH`` or ``URL`` are called the "context" of the build. The
build process may refer to any of the files in the context, for example when build process may refer to any of the files in the context, for example when
using an :ref:`ADD <dockerfile_add>` instruction. When a single ``Dockerfile`` using an :ref:`ADD <dockerfile_add>` instruction. When a single ``Dockerfile``
is given as ``URL``, then no context is set. When a Git repository is set as is given as ``URL``, then no context is set. When a Git repository is set as
``URL``, then the repository is used as the context ``URL``, then the repository is used as the context. Git repositories are
cloned with their submodules (`git clone --recursive`).
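For instance (repository URL illustrative):
.. code-block:: bash
# build straight from a git URL; the cloned repository, submodules
# included, becomes the build context
sudo docker build -t myimage github.com/creack/docker-firefox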
.. _cli_build_examples: .. _cli_build_examples:
@ -1083,6 +1089,10 @@ is, ``docker run`` is equivalent to the API ``/containers/create`` then
The ``docker run`` command can be used in combination with ``docker commit`` to The ``docker run`` command can be used in combination with ``docker commit`` to
:ref:`change the command that a container runs <cli_commit_examples>`. :ref:`change the command that a container runs <cli_commit_examples>`.
See :ref:`port_redirection` for more detailed information about the ``--expose``,
``-p``, ``-P`` and ``--link`` parameters, and :ref:`working_with_links_names` for
specific examples using ``--link``.
Known Issues (run -volumes-from) Known Issues (run -volumes-from)
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~ ~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~

View File

@ -143,6 +143,7 @@ Network Settings
---------------- ----------------
:: ::
-n=true : Enable networking for this container -n=true : Enable networking for this container
-dns=[] : Set custom dns servers for the container -dns=[] : Set custom dns servers for the container

View File

@ -59,10 +59,10 @@ Bind Docker to another host/port or a Unix socket
.. warning:: Changing the default ``docker`` daemon binding to a TCP .. warning:: Changing the default ``docker`` daemon binding to a TCP
port or Unix *docker* user group will increase your security risks port or Unix *docker* user group will increase your security risks
by allowing non-root users to potentially gain *root* access on the by allowing non-root users to gain *root* access on the
host (`e.g. #1369 host. Make sure you control access to ``docker``. If you are binding
<https://github.com/dotcloud/docker/issues/1369>`_). Make sure you to a TCP port, anyone with access to that port has full Docker access;
control access to ``docker``. so it is not advisable on an open network.
With ``-H`` it is possible to make the Docker daemon listen on a With ``-H`` it is possible to make the Docker daemon listen on a
specific IP and port. By default, it will listen on specific IP and port. By default, it will listen on

View File

@ -31,6 +31,15 @@ container, Docker provides ways to bind the container port to an
interface of the host system. To simplify communication between interface of the host system. To simplify communication between
containers, Docker provides the linking mechanism. containers, Docker provides the linking mechanism.
Auto map all exposed ports on the host
--------------------------------------
To bind all the exposed container ports to the host automatically, use
``docker run -P <imageid>``. The mapped host ports will be auto-selected
from a pool of unused ports (49000..49900), and you will need to use
``docker ps``, ``docker inspect <container_id>`` or
``docker port <container_id> <port>`` to determine what they are.
Binding a port to a host interface Binding a port to a host interface
----------------------------------- -----------------------------------

View File

@ -101,13 +101,23 @@ might not work on any other machine.
For example:: For example::
sudo docker run -v /var/logs:/var/host_logs:ro ubuntu bash sudo docker run -t -i -v /var/logs:/var/host_logs:ro ubuntu bash
The command above mounts the host directory ``/var/logs`` into the The command above mounts the host directory ``/var/logs`` into the
container with read only permissions as ``/var/host_logs``. container with read only permissions as ``/var/host_logs``.
.. versionadded:: v0.5.0 .. versionadded:: v0.5.0
Note for OS/X users and remote daemon users:
--------------------------------------------
OS/X users run ``boot2docker`` to create a minimalist virtual machine running the docker daemon. That
virtual machine then launches docker commands on behalf of the OS/X command line. This means that ``host
directories`` refer to directories in the ``boot2docker`` virtual machine, not the OS/X filesystem.
Similarly, whenever the docker daemon is on a remote machine, the ``host directories`` always refer to directories on the daemon's machine.
Known Issues Known Issues
............ ............

View File

@ -4,6 +4,7 @@ import (
"io/ioutil" "io/ioutil"
"os" "os"
"path" "path"
"path/filepath"
"testing" "testing"
) )
@ -64,6 +65,18 @@ func TestEngineRoot(t *testing.T) {
t.Fatal(err) t.Fatal(err)
} }
defer os.RemoveAll(tmp) defer os.RemoveAll(tmp)
// We expect Root to resolve to an absolute path.
// FIXME: this should not be necessary.
// Until the above FIXME is implemented, let's check for the
// current behavior.
tmp, err = filepath.EvalSymlinks(tmp)
if err != nil {
t.Fatal(err)
}
tmp, err = filepath.Abs(tmp)
if err != nil {
t.Fatal(err)
}
dir := path.Join(tmp, "dir") dir := path.Join(tmp, "dir")
eng, err := New(dir) eng, err := New(dir)
if err != nil { if err != nil {

View File

@ -99,6 +99,8 @@ type Command struct {
Network *Network `json:"network"` // if network is nil then networking is disabled Network *Network `json:"network"` // if network is nil then networking is disabled
Config []string `json:"config"` // generic values that specific drivers can consume Config []string `json:"config"` // generic values that specific drivers can consume
Resources *Resources `json:"resources"` Resources *Resources `json:"resources"`
Console string `json:"-"`
} }
// Return the pid of the process // Return the pid of the process

View File

@ -279,7 +279,8 @@ func (i *info) IsRunning() bool {
output, err := i.driver.getInfo(i.ID) output, err := i.driver.getInfo(i.ID)
if err != nil { if err != nil {
panic(err) utils.Errorf("Error getting info for lxc container %s: %s (%s)", i.ID, err, output)
return false
} }
if strings.Contains(string(output), "RUNNING") { if strings.Contains(string(output), "RUNNING") {
running = true running = true

View File

@ -4,11 +4,10 @@ import (
"fmt" "fmt"
"github.com/dotcloud/docker/execdriver" "github.com/dotcloud/docker/execdriver"
"github.com/dotcloud/docker/pkg/netlink" "github.com/dotcloud/docker/pkg/netlink"
"github.com/dotcloud/docker/utils" "github.com/dotcloud/docker/pkg/user"
"github.com/syndtr/gocapability/capability" "github.com/syndtr/gocapability/capability"
"net" "net"
"os" "os"
"strconv"
"strings" "strings"
"syscall" "syscall"
) )
@ -79,35 +78,28 @@ func setupWorkingDirectory(args *execdriver.InitArgs) error {
// Takes care of dropping privileges to the desired user // Takes care of dropping privileges to the desired user
func changeUser(args *execdriver.InitArgs) error {
	if args.User == "" {
		return nil
	}
	userent, err := utils.UserLookup(args.User)
	if err != nil {
		return fmt.Errorf("Unable to find user %v: %v", args.User, err)
	}
	uid, err := strconv.Atoi(userent.Uid)
	if err != nil {
		return fmt.Errorf("Invalid uid: %v", userent.Uid)
	}
	gid, err := strconv.Atoi(userent.Gid)
	if err != nil {
		return fmt.Errorf("Invalid gid: %v", userent.Gid)
	}
	if err := syscall.Setgid(gid); err != nil {
		return fmt.Errorf("setgid failed: %v", err)
	}
	if err := syscall.Setuid(uid); err != nil {
		return fmt.Errorf("setuid failed: %v", err)
	}
	return nil
}
func changeUser(args *execdriver.InitArgs) error {
	uid, gid, suppGids, err := user.GetUserGroupSupplementary(
		args.User,
		syscall.Getuid(), syscall.Getgid(),
	)
	if err != nil {
		return err
	}
	if err := syscall.Setgroups(suppGids); err != nil {
		return fmt.Errorf("Setgroups failed: %v", err)
	}
	if err := syscall.Setgid(gid); err != nil {
		return fmt.Errorf("Setgid failed: %v", err)
	}
	if err := syscall.Setuid(uid); err != nil {
		return fmt.Errorf("Setuid failed: %v", err)
	}
	return nil
}
func setupCapabilities(args *execdriver.InitArgs) error { func setupCapabilities(args *execdriver.InitArgs) error {
if args.Privileged { if args.Privileged {
return nil return nil
} }
@ -127,6 +119,7 @@ func setupCapabilities(args *execdriver.InitArgs) error {
capability.CAP_AUDIT_CONTROL, capability.CAP_AUDIT_CONTROL,
capability.CAP_MAC_OVERRIDE, capability.CAP_MAC_OVERRIDE,
capability.CAP_MAC_ADMIN, capability.CAP_MAC_ADMIN,
capability.CAP_NET_ADMIN,
} }
c, err := capability.NewPid(os.Getpid()) c, err := capability.NewPid(os.Getpid())

View File

@ -15,6 +15,7 @@ lxc.network.name = eth0
{{else}} {{else}}
# network is disabled (-n=false) # network is disabled (-n=false)
lxc.network.type = empty lxc.network.type = empty
lxc.network.flags = up
{{end}} {{end}}
# root filesystem # root filesystem
@ -79,6 +80,10 @@ lxc.mount.entry = proc {{escapeFstabSpaces $ROOTFS}}/proc proc nosuid,nodev,noex
# if your userspace allows it. eg. see http://bit.ly/T9CkqJ # if your userspace allows it. eg. see http://bit.ly/T9CkqJ
lxc.mount.entry = sysfs {{escapeFstabSpaces $ROOTFS}}/sys sysfs nosuid,nodev,noexec 0 0 lxc.mount.entry = sysfs {{escapeFstabSpaces $ROOTFS}}/sys sysfs nosuid,nodev,noexec 0 0
{{if .Tty}}
lxc.mount.entry = {{.Console}} {{escapeFstabSpaces $ROOTFS}}/dev/console none bind,rw 0 0
{{end}}
lxc.mount.entry = devpts {{escapeFstabSpaces $ROOTFS}}/dev/pts devpts newinstance,ptmxmode=0666,nosuid,noexec 0 0 lxc.mount.entry = devpts {{escapeFstabSpaces $ROOTFS}}/dev/pts devpts newinstance,ptmxmode=0666,nosuid,noexec 0 0
lxc.mount.entry = shm {{escapeFstabSpaces $ROOTFS}}/dev/shm tmpfs size=65536k,nosuid,nodev,noexec 0 0 lxc.mount.entry = shm {{escapeFstabSpaces $ROOTFS}}/dev/shm tmpfs size=65536k,nosuid,nodev,noexec 0 0

View File

@ -3,7 +3,9 @@ package docker
import ( import (
"fmt" "fmt"
"github.com/dotcloud/docker/archive" "github.com/dotcloud/docker/archive"
"github.com/dotcloud/docker/dockerversion"
"github.com/dotcloud/docker/graphdriver" "github.com/dotcloud/docker/graphdriver"
"github.com/dotcloud/docker/runconfig"
"github.com/dotcloud/docker/utils" "github.com/dotcloud/docker/utils"
"io" "io"
"io/ioutil" "io/ioutil"
@ -125,12 +127,12 @@ func (graph *Graph) Get(name string) (*Image, error) {
} }
// Create creates a new image and registers it in the graph. // Create creates a new image and registers it in the graph.
func (graph *Graph) Create(layerData archive.Archive, container *Container, comment, author string, config *Config) (*Image, error) { func (graph *Graph) Create(layerData archive.ArchiveReader, container *Container, comment, author string, config *runconfig.Config) (*Image, error) {
img := &Image{ img := &Image{
ID: GenerateID(), ID: GenerateID(),
Comment: comment, Comment: comment,
Created: time.Now().UTC(), Created: time.Now().UTC(),
DockerVersion: VERSION, DockerVersion: dockerversion.VERSION,
Author: author, Author: author,
Config: config, Config: config,
Architecture: runtime.GOARCH, Architecture: runtime.GOARCH,
@ -149,7 +151,7 @@ func (graph *Graph) Create(layerData archive.Archive, container *Container, comm
// Register imports a pre-existing image into the graph. // Register imports a pre-existing image into the graph.
// FIXME: pass img as first argument // FIXME: pass img as first argument
func (graph *Graph) Register(jsonData []byte, layerData archive.Archive, img *Image) (err error) { func (graph *Graph) Register(jsonData []byte, layerData archive.ArchiveReader, img *Image) (err error) {
defer func() { defer func() {
// If any error occurs, remove the new dir from the driver. // If any error occurs, remove the new dir from the driver.
// Don't check for errors since the dir might not have been created. // Don't check for errors since the dir might not have been created.
@ -224,7 +226,9 @@ func (graph *Graph) TempLayerArchive(id string, compression archive.Compression,
if err != nil { if err != nil {
return nil, err return nil, err
} }
return archive.NewTempArchive(utils.ProgressReader(ioutil.NopCloser(a), 0, output, sf, false, utils.TruncateID(id), "Buffering to disk"), tmp) progress := utils.ProgressReader(a, 0, output, sf, false, utils.TruncateID(id), "Buffering to disk")
defer progress.Close()
return archive.NewTempArchive(progress, tmp)
} }
// Mktemp creates a temporary sub-directory inside the graph's filesystem. // Mktemp creates a temporary sub-directory inside the graph's filesystem.

View File

@ -271,7 +271,7 @@ func (a *Driver) Diff(id string) (archive.Archive, error) {
}) })
} }
func (a *Driver) ApplyDiff(id string, diff archive.Archive) error { func (a *Driver) ApplyDiff(id string, diff archive.ArchiveReader) error {
return archive.Untar(diff, path.Join(a.rootPath(), "diff", id), nil) return archive.Untar(diff, path.Join(a.rootPath(), "diff", id), nil)
} }

View File

@ -12,6 +12,7 @@ import (
"path" "path"
"path/filepath" "path/filepath"
"strconv" "strconv"
"strings"
"sync" "sync"
"time" "time"
) )
@ -29,6 +30,15 @@ type DevInfo struct {
TransactionId uint64 `json:"transaction_id"` TransactionId uint64 `json:"transaction_id"`
Initialized bool `json:"initialized"` Initialized bool `json:"initialized"`
devices *DeviceSet `json:"-"` devices *DeviceSet `json:"-"`
mountCount int `json:"-"`
mountPath string `json:"-"`
// A floating mount means one reference is not owned and
// will be stolen by the next mount. This allows us to
// avoid unmounting directly after creation before the
// first get (since we need to mount to set up the device
// a bit first).
floating bool `json:"-"`
} }
type MetaData struct { type MetaData struct {
@ -43,7 +53,7 @@ type DeviceSet struct {
TransactionId uint64 TransactionId uint64
NewTransactionId uint64 NewTransactionId uint64
nextFreeDevice int nextFreeDevice int
activeMounts map[string]int sawBusy bool
} }
type DiskUsage struct { type DiskUsage struct {
@ -69,6 +79,14 @@ type DevStatus struct {
HighestMappedSector uint64 HighestMappedSector uint64
} }
type UnmountMode int
const (
UnmountRegular UnmountMode = iota
UnmountFloat
UnmountSink
)
func getDevName(name string) string { func getDevName(name string) string {
return "/dev/mapper/" + name return "/dev/mapper/" + name
} }
@ -290,7 +308,7 @@ func (devices *DeviceSet) setupBaseImage() error {
if oldInfo != nil && !oldInfo.Initialized { if oldInfo != nil && !oldInfo.Initialized {
utils.Debugf("Removing uninitialized base image") utils.Debugf("Removing uninitialized base image")
if err := devices.removeDevice(""); err != nil { if err := devices.deleteDevice(""); err != nil {
utils.Debugf("\n--->Err: %s\n", err) utils.Debugf("\n--->Err: %s\n", err)
return err return err
} }
@ -355,6 +373,10 @@ func (devices *DeviceSet) log(level int, file string, line int, dmError int, mes
return // Ignore _LOG_DEBUG return // Ignore _LOG_DEBUG
} }
if strings.Contains(message, "busy") {
devices.sawBusy = true
}
utils.Debugf("libdevmapper(%d): %s:%d (%d) %s", level, file, line, dmError, message) utils.Debugf("libdevmapper(%d): %s:%d (%d) %s", level, file, line, dmError, message)
} }
@ -562,7 +584,7 @@ func (devices *DeviceSet) AddDevice(hash, baseHash string) error {
return nil return nil
} }
func (devices *DeviceSet) removeDevice(hash string) error { func (devices *DeviceSet) deleteDevice(hash string) error {
info := devices.Devices[hash] info := devices.Devices[hash]
if info == nil { if info == nil {
return fmt.Errorf("hash %s doesn't exists", hash) return fmt.Errorf("hash %s doesn't exists", hash)
@ -579,7 +601,7 @@ func (devices *DeviceSet) removeDevice(hash string) error {
devinfo, _ := getInfo(info.Name()) devinfo, _ := getInfo(info.Name())
if devinfo != nil && devinfo.Exists != 0 { if devinfo != nil && devinfo.Exists != 0 {
if err := removeDevice(info.Name()); err != nil { if err := devices.removeDeviceAndWait(info.Name()); err != nil {
utils.Debugf("Error removing device: %s\n", err) utils.Debugf("Error removing device: %s\n", err)
return err return err
} }
@ -610,33 +632,45 @@ func (devices *DeviceSet) removeDevice(hash string) error {
return nil return nil
} }
func (devices *DeviceSet) RemoveDevice(hash string) error { func (devices *DeviceSet) DeleteDevice(hash string) error {
devices.Lock() devices.Lock()
defer devices.Unlock() defer devices.Unlock()
return devices.removeDevice(hash) return devices.deleteDevice(hash)
} }
func (devices *DeviceSet) deactivateDevice(hash string) error { func (devices *DeviceSet) deactivatePool() error {
utils.Debugf("[devmapper] deactivateDevice(%s)", hash) utils.Debugf("[devmapper] deactivatePool()")
defer utils.Debugf("[devmapper] deactivateDevice END") defer utils.Debugf("[devmapper] deactivatePool END")
var devname string devname := devices.getPoolDevName()
// FIXME: shouldn't we just register the pool into devices?
devname, err := devices.byHash(hash)
if err != nil {
return err
}
devinfo, err := getInfo(devname) devinfo, err := getInfo(devname)
if err != nil { if err != nil {
utils.Debugf("\n--->Err: %s\n", err) utils.Debugf("\n--->Err: %s\n", err)
return err return err
} }
if devinfo.Exists != 0 { if devinfo.Exists != 0 {
if err := removeDevice(devname); err != nil { return removeDevice(devname)
}
return nil
}
func (devices *DeviceSet) deactivateDevice(hash string) error {
utils.Debugf("[devmapper] deactivateDevice(%s)", hash)
defer utils.Debugf("[devmapper] deactivateDevice END")
info := devices.Devices[hash]
if info == nil {
return fmt.Errorf("Unknown device %s", hash)
}
devinfo, err := getInfo(info.Name())
if err != nil {
utils.Debugf("\n--->Err: %s\n", err) utils.Debugf("\n--->Err: %s\n", err)
return err return err
} }
if err := devices.waitRemove(hash); err != nil { if devinfo.Exists != 0 {
if err := devices.removeDeviceAndWait(info.Name()); err != nil {
utils.Debugf("\n--->Err: %s\n", err)
return err return err
} }
} }
@ -644,16 +678,41 @@ func (devices *DeviceSet) deactivateDevice(hash string) error {
return nil return nil
} }
// waitRemove blocks until either:
// a) the device registered at <device_set_prefix>-<hash> is removed,
// or b) the 1 second timeout expires.
func (devices *DeviceSet) waitRemove(hash string) error {
	utils.Debugf("[deviceset %s] waitRemove(%s)", devices.devicePrefix, hash)
	defer utils.Debugf("[deviceset %s] waitRemove(%) END", devices.devicePrefix, hash)
	devname, err := devices.byHash(hash)
	if err != nil {
		return err
	}
// Issues the underlying dm remove operation and then waits
// for it to finish.
func (devices *DeviceSet) removeDeviceAndWait(devname string) error {
	var err error

	for i := 0; i < 10; i++ {
		devices.sawBusy = false
		err = removeDevice(devname)
		if err == nil {
			break
		}
		if !devices.sawBusy {
			return err
		}

		// If we see EBUSY it may be a transient error,
		// sleep a bit and retry a few times.
		time.Sleep(5 * time.Millisecond)
	}
	if err != nil {
		return err
	}

	if err := devices.waitRemove(devname); err != nil {
		return err
	}
	return nil
}
// waitRemove blocks until either:
// a) the device registered at <device_set_prefix>-<hash> is removed,
// or b) the 1 second timeout expires.
func (devices *DeviceSet) waitRemove(devname string) error {
utils.Debugf("[deviceset %s] waitRemove(%s)", devices.devicePrefix, devname)
defer utils.Debugf("[deviceset %s] waitRemove(%s) END", devices.devicePrefix, devname)
i := 0 i := 0
for ; i < 1000; i += 1 { for ; i < 1000; i += 1 {
devinfo, err := getInfo(devname) devinfo, err := getInfo(devname)
@ -681,18 +740,18 @@ func (devices *DeviceSet) waitRemove(hash string) error {
// a) the device registered at <device_set_prefix>-<hash> is closed, // a) the device registered at <device_set_prefix>-<hash> is closed,
// or b) the 1 second timeout expires. // or b) the 1 second timeout expires.
func (devices *DeviceSet) waitClose(hash string) error { func (devices *DeviceSet) waitClose(hash string) error {
devname, err := devices.byHash(hash) info := devices.Devices[hash]
if err != nil { if info == nil {
return err return fmt.Errorf("Unknown device %s", hash)
} }
i := 0 i := 0
for ; i < 1000; i += 1 { for ; i < 1000; i += 1 {
devinfo, err := getInfo(devname) devinfo, err := getInfo(info.Name())
if err != nil { if err != nil {
return err return err
} }
if i%100 == 0 { if i%100 == 0 {
utils.Debugf("Waiting for unmount of %s: opencount=%d", devname, devinfo.OpenCount) utils.Debugf("Waiting for unmount of %s: opencount=%d", hash, devinfo.OpenCount)
} }
if devinfo.OpenCount == 0 { if devinfo.OpenCount == 0 {
break break
@ -700,26 +759,11 @@ func (devices *DeviceSet) waitClose(hash string) error {
time.Sleep(1 * time.Millisecond) time.Sleep(1 * time.Millisecond)
} }
if i == 1000 { if i == 1000 {
return fmt.Errorf("Timeout while waiting for device %s to close", devname) return fmt.Errorf("Timeout while waiting for device %s to close", hash)
} }
return nil return nil
} }
// byHash is a hack to allow looking up the deviceset's pool by the hash "pool".
// FIXME: it seems probably cleaner to register the pool in devices.Devices,
// but I am afraid of arcane implications deep in the devicemapper code,
// so this will do.
func (devices *DeviceSet) byHash(hash string) (devname string, err error) {
if hash == "pool" {
return devices.getPoolDevName(), nil
}
info := devices.Devices[hash]
if info == nil {
return "", fmt.Errorf("hash %s doesn't exists", hash)
}
return info.Name(), nil
}
 func (devices *DeviceSet) Shutdown() error {
 	devices.Lock()
 	defer devices.Unlock()
@ -728,13 +772,12 @@ func (devices *DeviceSet) Shutdown() error {
 	utils.Debugf("[devmapper] Shutting down DeviceSet: %s", devices.root)
 	defer utils.Debugf("[deviceset %s] shutdown END", devices.devicePrefix)
-	for path, count := range devices.activeMounts {
-		for i := count; i > 0; i-- {
-			if err := sysUnmount(path, 0); err != nil {
-				utils.Debugf("Shutdown unmounting %s, error: %s\n", path, err)
+	for _, info := range devices.Devices {
+		if info.mountCount > 0 {
+			if err := sysUnmount(info.mountPath, 0); err != nil {
+				utils.Debugf("Shutdown unmounting %s, error: %s\n", info.mountPath, err)
 			}
 		}
-		delete(devices.activeMounts, path)
 	}

 	for _, d := range devices.Devices {
@ -746,32 +789,42 @@ func (devices *DeviceSet) Shutdown() error {
 		}
 	}

-	pool := devices.getPoolDevName()
-	if devinfo, err := getInfo(pool); err == nil && devinfo.Exists != 0 {
-		if err := devices.deactivateDevice("pool"); err != nil {
-			utils.Debugf("Shutdown deactivate %s , error: %s\n", pool, err)
-		}
+	if err := devices.deactivatePool(); err != nil {
+		utils.Debugf("Shutdown deactivate pool , error: %s\n", err)
 	}

 	return nil
 }
-func (devices *DeviceSet) MountDevice(hash, path string, readOnly bool) error {
+func (devices *DeviceSet) MountDevice(hash, path string) error {
 	devices.Lock()
 	defer devices.Unlock()

+	info := devices.Devices[hash]
+	if info == nil {
+		return fmt.Errorf("Unknown device %s", hash)
+	}
+
+	if info.mountCount > 0 {
+		if path != info.mountPath {
+			return fmt.Errorf("Trying to mount devmapper device in multiple places (%s, %s)", info.mountPath, path)
+		}
+
+		if info.floating {
+			// Steal floating ref
+			info.floating = false
+		} else {
+			info.mountCount++
+		}
+		return nil
+	}
+
 	if err := devices.activateDeviceIfNeeded(hash); err != nil {
 		return fmt.Errorf("Error activating devmapper device for '%s': %s", hash, err)
 	}

-	info := devices.Devices[hash]
 	var flags uintptr = sysMsMgcVal
-	if readOnly {
-		flags = flags | sysMsRdOnly
-	}
 	err := sysMount(info.DevName(), path, "ext4", flags, "discard")
 	if err != nil && err == sysEInval {
 		err = sysMount(info.DevName(), path, "ext4", flags, "")
@ -780,20 +833,53 @@ func (devices *DeviceSet) MountDevice(hash, path string, readOnly bool) error {
 		return fmt.Errorf("Error mounting '%s' on '%s': %s", info.DevName(), path, err)
 	}

-	count := devices.activeMounts[path]
-	devices.activeMounts[path] = count + 1
+	info.mountCount = 1
+	info.mountPath = path
+	info.floating = false

 	return devices.setInitialized(hash)
 }
-func (devices *DeviceSet) UnmountDevice(hash, path string, deactivate bool) error {
+func (devices *DeviceSet) UnmountDevice(hash string, mode UnmountMode) error {
-	utils.Debugf("[devmapper] UnmountDevice(hash=%s path=%s)", hash, path)
+	utils.Debugf("[devmapper] UnmountDevice(hash=%s, mode=%d)", hash, mode)
 	defer utils.Debugf("[devmapper] UnmountDevice END")
 	devices.Lock()
 	defer devices.Unlock()

-	utils.Debugf("[devmapper] Unmount(%s)", path)
-	if err := sysUnmount(path, 0); err != nil {
+	info := devices.Devices[hash]
+	if info == nil {
+		return fmt.Errorf("UnmountDevice: no such device %s\n", hash)
+	}
+
+	if mode == UnmountFloat {
+		if info.floating {
+			return fmt.Errorf("UnmountDevice: can't float floating reference %s\n", hash)
+		}
+
+		// Leave this reference floating
+		info.floating = true
+		return nil
+	}
+
+	if mode == UnmountSink {
+		if !info.floating {
+			// Someone already sunk this
+			return nil
+		}
+		// Otherwise, treat this as a regular unmount
+	}
+
+	if info.mountCount == 0 {
+		return fmt.Errorf("UnmountDevice: device not-mounted id %s\n", hash)
+	}
+
+	info.mountCount--
+	if info.mountCount > 0 {
+		return nil
+	}
+
+	utils.Debugf("[devmapper] Unmount(%s)", info.mountPath)
+	if err := sysUnmount(info.mountPath, 0); err != nil {
 		utils.Debugf("\n--->Err: %s\n", err)
 		return err
 	}
@ -804,15 +890,9 @@ func (devices *DeviceSet) UnmountDevice(hash, path string, deactivate bool) erro
 		return err
 	}

-	if count := devices.activeMounts[path]; count > 1 {
-		devices.activeMounts[path] = count - 1
-	} else {
-		delete(devices.activeMounts, path)
-	}
-
-	if deactivate {
 	devices.deactivateDevice(hash)
-	}
+	info.mountPath = ""

 	return nil
 }
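The mountCount/floating pair implements a small trick: Create leaves its mount reference "floating" so that a following Get can steal it instead of remounting, and a sink drops the float if nobody ever claimed it. Here is a stripped-down sketch of that bookkeeping with hypothetical names (mountRef and its methods), assuming single-threaded use; it is an illustration of the idea, not the devmapper code.

```
package main

import "fmt"

type mountRef struct {
	count    int
	floating bool
}

// acquire steals the floating reference if one exists, otherwise it
// takes a regular reference.
func (m *mountRef) acquire() {
	if m.floating {
		m.floating = false
		return
	}
	m.count++
}

// float leaves the current reference dangling for the next acquire.
func (m *mountRef) float() { m.floating = true }

// sink drops the float if nobody ever stole it.
func (m *mountRef) sink() {
	if m.floating {
		m.floating = false
		m.release()
	}
}

func (m *mountRef) release() {
	m.count--
	if m.count == 0 {
		fmt.Println("last reference gone: unmount here")
	}
}

func main() {
	m := &mountRef{count: 1} // mounted once during create
	m.float()                // create floats its reference
	m.acquire()              // the first get steals the float; count stays 1
	m.release()              // put drops the last reference and unmounts
}
```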
@ -957,7 +1037,6 @@ func NewDeviceSet(root string, doInit bool) (*DeviceSet, error) {
 	devices := &DeviceSet{
 		root:     root,
 		MetaData: MetaData{Devices: make(map[string]*DevInfo)},
-		activeMounts: make(map[string]int),
 	}

 	if err := devices.initDevmapper(doInit); err != nil {

View File

@ -324,7 +324,7 @@ func createPool(poolName string, dataFile, metadataFile *osFile) error {
 		return fmt.Errorf("Can't get data size")
 	}

-	params := metadataFile.Name() + " " + dataFile.Name() + " 128 32768"
+	params := metadataFile.Name() + " " + dataFile.Name() + " 128 32768 1 skip_block_zeroing"
 	if err := task.AddTarget(0, size/512, "thin-pool", params); err != nil {
 		return fmt.Errorf("Can't add target")
 	}
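For context, the thin-pool target line follows the kernel's thin-provisioning table format: `<metadata dev> <data dev> <data block size (sectors)> <low water mark> [<number of feature args> [<feature>]*]`. The appended `1 skip_block_zeroing` therefore passes one feature flag, which disables zeroing of newly provisioned blocks (a performance win, since docker overwrites them anyway). A small sketch of how such a table string can be assembled; the device paths are made up.

```
package main

import "fmt"

// thinPoolTable builds the params string for a device-mapper
// "thin-pool" target, appending optional feature arguments with
// their count, per the kernel's thin-provisioning table format.
func thinPoolTable(metadataDev, dataDev string, blockSizeSectors, lowWaterMark int, features ...string) string {
	params := fmt.Sprintf("%s %s %d %d", metadataDev, dataDev, blockSizeSectors, lowWaterMark)
	if len(features) > 0 {
		params += fmt.Sprintf(" %d", len(features))
		for _, f := range features {
			params += " " + f
		}
	}
	return params
}

func main() {
	// Matches the params string built in createPool above.
	fmt.Println(thinPoolTable("/dev/loop1", "/dev/loop0", 128, 32768, "skip_block_zeroing"))
	// Output: /dev/loop1 /dev/loop0 128 32768 1 skip_block_zeroing
}
```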

View File

@ -7,8 +7,8 @@ import (
 	"github.com/dotcloud/docker/graphdriver"
 	"github.com/dotcloud/docker/utils"
 	"io/ioutil"
+	"os"
 	"path"
-	"sync"
 )

 func init() {
@ -23,8 +23,6 @@ func init() {
 type Driver struct {
 	*DeviceSet
 	home string
-	sync.Mutex // Protects concurrent modification to active
-	active map[string]int
 }

 var Init = func(home string) (graphdriver.Driver, error) {
@ -35,7 +33,6 @@ var Init = func(home string) (graphdriver.Driver, error) {
 	d := &Driver{
 		DeviceSet: deviceSet,
 		home:      home,
-		active:    make(map[string]int),
 	}
 	return d, nil
 }
@ -83,55 +80,45 @@ func (d *Driver) Create(id, parent string) error {
 		return err
 	}

+	// We float this reference so that the next Get call can
+	// steal it, so we don't have to unmount
+	if err := d.DeviceSet.UnmountDevice(id, UnmountFloat); err != nil {
+		return err
+	}

 	return nil
 }

 func (d *Driver) Remove(id string) error {
-	// Protect the d.active from concurrent access
-	d.Lock()
-	defer d.Unlock()
-
-	if d.active[id] != 0 {
-		utils.Errorf("Warning: removing active id %s\n", id)
+	// Sink the float from create in case no Get() call was made
+	if err := d.DeviceSet.UnmountDevice(id, UnmountSink); err != nil {
+		return err
+	}
+
+	// This assumes the device has been properly Get/Put:ed and thus is unmounted
+	if err := d.DeviceSet.DeleteDevice(id); err != nil {
+		return err
 	}

 	mp := path.Join(d.home, "mnt", id)
-	if err := d.unmount(id, mp); err != nil {
+	if err := os.RemoveAll(mp); err != nil && !os.IsNotExist(err) {
 		return err
 	}
-	return d.DeviceSet.RemoveDevice(id)
+
+	return nil
 }

 func (d *Driver) Get(id string) (string, error) {
-	// Protect the d.active from concurrent access
-	d.Lock()
-	defer d.Unlock()
-
-	count := d.active[id]
-
 	mp := path.Join(d.home, "mnt", id)
-	if count == 0 {
-		if err := d.mount(id, mp); err != nil {
-			return "", err
-		}
-	}
-
-	d.active[id] = count + 1
+	if err := d.mount(id, mp); err != nil {
+		return "", err
+	}

 	return path.Join(mp, "rootfs"), nil
 }

 func (d *Driver) Put(id string) {
-	// Protect the d.active from concurrent access
-	d.Lock()
-	defer d.Unlock()
-
-	if count := d.active[id]; count > 1 {
-		d.active[id] = count - 1
-	} else {
-		mp := path.Join(d.home, "mnt", id)
-		d.unmount(id, mp)
-		delete(d.active, id)
+	if err := d.DeviceSet.UnmountDevice(id, UnmountRegular); err != nil {
+		utils.Errorf("Warning: error unmounting device %s: %s\n", id, err)
 	}
 }

@ -140,25 +127,8 @@ func (d *Driver) mount(id, mountPoint string) error {
 	if err := osMkdirAll(mountPoint, 0755); err != nil && !osIsExist(err) {
 		return err
 	}
-
-	// If mountpoint is already mounted, do nothing
-	if mounted, err := Mounted(mountPoint); err != nil {
-		return fmt.Errorf("Error checking mountpoint: %s", err)
-	} else if mounted {
-		return nil
-	}
-
 	// Mount the device
-	return d.DeviceSet.MountDevice(id, mountPoint, false)
+	return d.DeviceSet.MountDevice(id, mountPoint)
 }

-func (d *Driver) unmount(id, mountPoint string) error {
-	// If mountpoint is not mounted, do nothing
-	if mounted, err := Mounted(mountPoint); err != nil {
-		return fmt.Errorf("Error checking mountpoint: %s", err)
-	} else if !mounted {
-		return nil
-	}
-	// Unmount the device
-	return d.DeviceSet.UnmountDevice(id, mountPoint, true)
-}

 func (d *Driver) Exists(id string) bool {
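Taken together, the refactor moves all reference counting out of the driver and into the DeviceSet: Create mounts and floats a reference, the first Get steals it, Put releases it, and Remove sinks any leftover float before deleting. A hedged, self-contained walk-through of that lifecycle against a fake driver follows; fakeDriver is purely illustrative and only prints what the real devmapper driver would do.

```
package main

import "fmt"

type fakeDriver struct{ refs map[string]int }

func (d *fakeDriver) Create(id, parent string) error {
	d.refs[id] = 1 // mounted during create; the reference is left floating
	fmt.Println("create + float", id)
	return nil
}

func (d *fakeDriver) Get(id string) (string, error) {
	fmt.Println("get steals the float for", id)
	return "/mnt/" + id + "/rootfs", nil
}

func (d *fakeDriver) Put(id string) {
	d.refs[id]--
	fmt.Println("put releases", id)
}

func (d *fakeDriver) Remove(id string) error {
	fmt.Println("sink any leftover float, then delete", id)
	delete(d.refs, id)
	return nil
}

func main() {
	d := &fakeDriver{refs: map[string]int{}}
	d.Create("1", "")
	if dir, err := d.Get("1"); err == nil {
		fmt.Println("layer mounted at", dir)
	}
	d.Put("1")
	d.Remove("1")
}
```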

View File

@ -136,7 +136,12 @@ type Set map[string]bool
 func (r Set) Assert(t *testing.T, names ...string) {
 	for _, key := range names {
+		required := true
+		if strings.HasPrefix(key, "?") {
+			key = key[1:]
+			required = false
+		}
-		if _, exists := r[key]; !exists {
+		if _, exists := r[key]; !exists && required {
 			t.Fatalf("Key not set: %s", key)
 		}
 		delete(r, key)
@ -486,6 +491,7 @@ func TestDriverCreate(t *testing.T) {
 		"ioctl.blkgetsize",
 		"ioctl.loopsetfd",
 		"ioctl.loopsetstatus",
+		"?ioctl.loopctlgetfree",
 	)

 	if err := d.Create("1", ""); err != nil {
@ -495,7 +501,6 @@ func TestDriverCreate(t *testing.T) {
 		"DmTaskCreate",
 		"DmTaskGetInfo",
 		"sysMount",
-		"Mounted",
 		"DmTaskRun",
 		"DmTaskSetTarget",
 		"DmTaskSetSector",
@ -604,6 +609,7 @@ func TestDriverRemove(t *testing.T) {
 		"ioctl.blkgetsize",
 		"ioctl.loopsetfd",
 		"ioctl.loopsetstatus",
+		"?ioctl.loopctlgetfree",
 	)

 	if err := d.Create("1", ""); err != nil {
@ -614,7 +620,6 @@ func TestDriverRemove(t *testing.T) {
 		"DmTaskCreate",
 		"DmTaskGetInfo",
 		"sysMount",
-		"Mounted",
 		"DmTaskRun",
 		"DmTaskSetTarget",
 		"DmTaskSetSector",
@ -645,7 +650,6 @@ func TestDriverRemove(t *testing.T) {
 		"DmTaskSetTarget",
 		"DmTaskSetAddNode",
 		"DmUdevWait",
-		"Mounted",
 		"sysUnmount",
 	)
 }()
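The "?" prefix added to Assert lets a test name calls that may or may not happen (here, an ioctl that depends on the kernel's loop driver) without failing when they are absent. A minimal sketch of the same mechanism, reworked to return an error so it runs standalone; names are stand-ins for the test helper above.

```
package main

import (
	"fmt"
	"strings"
)

type Set map[string]bool

// Assert reports an error for every required key that was not
// recorded; a leading "?" marks a key as optional.
func (r Set) Assert(names ...string) error {
	for _, key := range names {
		required := true
		if strings.HasPrefix(key, "?") {
			key = key[1:]
			required = false
		}
		if _, exists := r[key]; !exists && required {
			return fmt.Errorf("Key not set: %s", key)
		}
		delete(r, key)
	}
	return nil
}

func main() {
	calls := Set{"sysMount": true}
	fmt.Println(calls.Assert("sysMount", "?ioctl.loopctlgetfree")) // <nil>
}
```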

View File

@ -28,7 +28,7 @@ type Driver interface {
 type Differ interface {
 	Diff(id string) (archive.Archive, error)
 	Changes(id string) ([]archive.Change, error)
-	ApplyDiff(id string, diff archive.Archive) error
+	ApplyDiff(id string, diff archive.ArchiveReader) error
 	DiffSize(id string) (bytes int64, err error)
 }
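Loosening ApplyDiff to accept a reader type means the layer diff can be consumed as a stream, straight off the network or a file, without first materializing a full archive. A sketch of the same idea using the standard library; the `differ`/`printDiffer` names are stand-ins, not the docker API.

```
package main

import (
	"fmt"
	"io"
	"strings"
)

type differ interface {
	ApplyDiff(id string, diff io.Reader) error
}

type printDiffer struct{}

func (printDiffer) ApplyDiff(id string, diff io.Reader) error {
	// A real driver would untar the stream into the layer directory;
	// here we just count the bytes as they stream through.
	n, err := io.Copy(io.Discard, diff)
	fmt.Printf("applied %d bytes to %s\n", n, id)
	return err
}

func main() {
	var d differ = printDiffer{}
	d.ApplyDiff("layer-1", strings.NewReader("tar bytes..."))
}
```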

View File

@ -1,56 +0,0 @@
docker-ci
=========
docker-ci is our buildbot continuous integration server,
building and testing docker, hosted on EC2 and reachable at
http://docker-ci.dotcloud.com
Deployment
==========
# Load AWS credentials
export AWS_ACCESS_KEY_ID=''
export AWS_SECRET_ACCESS_KEY=''
export AWS_KEYPAIR_NAME=''
export AWS_SSH_PRIVKEY=''
# Load buildbot credentials and config
export BUILDBOT_PWD=''
export IRC_PWD=''
export IRC_CHANNEL='docker-dev'
export SMTP_USER=''
export SMTP_PWD=''
export EMAIL_RCP=''
# Load registry test credentials
export REGISTRY_USER=''
export REGISTRY_PWD=''
cd docker/testing
vagrant up --provider=aws
github pull request
===================
The entire docker pull request test workflow is event driven by github. Its
usage is fully automatic and the results are logged in docker-ci.dotcloud.com
Each time there is a pull request on docker's github project, github connects
to docker-ci using github's rest API documented in http://developer.github.com/v3/repos/hooks
The issued command to program github's notification PR event was:
curl -u GITHUB_USER:GITHUB_PASSWORD -d '{"name":"web","active":true,"events":["pull_request"],"config":{"url":"http://docker-ci.dotcloud.com:8011/change_hook/github?project=docker"}}' https://api.github.com/repos/dotcloud/docker/hooks
buildbot (0.8.7p1) was patched using ./testing/buildbot/github.py, so it
can understand the PR data github sends to it. Originally PR #1603 (ee64e099e0)
implemented this capability. We also added a new scheduler to exclusively filter
PRs, and a 'pullrequest' builder to rebase each PR on top of master and test it.
nightly release
===============
The nightly release process is done by buildbot, running a DinD container that downloads
the docker repository and builds the release container. The resulting docker
binary is then tested, and if everything is fine, the release is done.

View File

@ -1,47 +1,29 @@
-# VERSION: 0.25
-# DOCKER-VERSION 0.6.6
-# AUTHOR: Daniel Mizyrycki <daniel@docker.com>
-# DESCRIPTION: Deploy docker-ci on Digital Ocean
-# COMMENTS:
-# CONFIG_JSON is an environment variable json string loaded as:
-#
-# export CONFIG_JSON='
-# { "DROPLET_NAME": "docker-ci",
-# "DO_CLIENT_ID": "Digital_Ocean_client_id",
-# "DO_API_KEY": "Digital_Ocean_api_key",
-# "DOCKER_KEY_ID": "Digital_Ocean_ssh_key_id",
-# "DOCKER_CI_KEY_PATH": "docker-ci_private_key_path",
-# "DOCKER_CI_PUB": "$(cat docker-ci_ssh_public_key.pub)",
-# "DOCKER_CI_KEY": "$(cat docker-ci_ssh_private_key.key)",
-# "BUILDBOT_PWD": "Buildbot_server_password",
-# "IRC_PWD": "Buildbot_IRC_password",
-# "SMTP_USER": "SMTP_server_user",
-# "SMTP_PWD": "SMTP_server_password",
-# "PKG_ACCESS_KEY": "Docker_release_S3_bucket_access_key",
-# "PKG_SECRET_KEY": "Docker_release_S3_bucket_secret_key",
-# "PKG_GPG_PASSPHRASE": "Docker_release_gpg_passphrase",
-# "INDEX_AUTH": "Index_encripted_user_password",
-# "REGISTRY_USER": "Registry_test_user",
-# "REGISTRY_PWD": "Registry_test_password",
-# "REGISTRY_BUCKET": "Registry_S3_bucket_name",
-# "REGISTRY_ACCESS_KEY": "Registry_S3_bucket_access_key",
-# "REGISTRY_SECRET_KEY": "Registry_S3_bucket_secret_key",
-# "IRC_CHANNEL": "Buildbot_IRC_channel",
-# "EMAIL_RCP": "Buildbot_mailing_receipient" }'
-#
-#
-# TO_BUILD: docker build -t docker-ci .
-# TO_DEPLOY: docker run -e CONFIG_JSON="${CONFIG_JSON}" docker-ci
+# DOCKER-VERSION: 0.7.6
+# AUTHOR: Daniel Mizyrycki <daniel@dotcloud.com>
+# DESCRIPTION: docker-ci continuous integration service
+# TO_BUILD: docker build -rm -t docker-ci/docker-ci .
+# TO_RUN: docker run -rm -i -t -p 8000:80 -p 2222:22 -v /run:/var/socket \
+#         -v /data/docker-ci:/data/docker-ci docker-ci/docker-ci

 from ubuntu:12.04
-maintainer Daniel Mizyrycki <daniel@dotcloud.com>
+ENV DEBIAN_FRONTEND noninteractive

-run echo 'deb http://archive.ubuntu.com/ubuntu precise main universe' \
-    > /etc/apt/sources.list
-run apt-get update; apt-get install -y git python2.7 python-dev libevent-dev \
-    python-pip ssh rsync less vim
-run pip install requests fabric
+RUN echo 'deb http://archive.ubuntu.com/ubuntu precise main universe' > \
+    /etc/apt/sources.list; apt-get update
+RUN apt-get install -y --no-install-recommends python2.7 python-dev \
+    libevent-dev git supervisor ssh rsync less vim sudo gcc wget nginx
+RUN cd /tmp; wget http://python-distribute.org/distribute_setup.py
+RUN cd /tmp; python distribute_setup.py; easy_install pip; rm distribute_setup.py

-# Add deployment code and set default container command
-add . /docker-ci
-cmd "/docker-ci/deployment.py"
+RUN apt-key adv --keyserver keyserver.ubuntu.com --recv-keys 36A1D7869245C8950F966E92D8576A8BA88D21E9
+RUN echo 'deb http://get.docker.io/ubuntu docker main' > \
+    /etc/apt/sources.list.d/docker.list; apt-get update
+RUN apt-get install -y lxc-docker-0.8.0
+RUN pip install SQLAlchemy==0.7.10 buildbot buildbot-slave pyopenssl boto
+RUN ln -s /var/socket/docker.sock /run/docker.sock
+ADD . /docker-ci
+RUN /docker-ci/setup.sh
+ENTRYPOINT ["supervisord", "-n"]

View File

@ -1,26 +1,65 @@
-=======
-testing
-=======
+=========
+docker-ci
+=========

-This directory contains docker-ci testing related files.
+This directory contains the docker-ci continuous integration system.
+As expected, it is fully dockerized and deployed using
+docker-container-runner.
+docker-ci is based on Buildbot, a continuous integration system designed
+to automate the build/test cycle. By automatically rebuilding and testing
+the tree each time something has changed, build problems are pinpointed
+quickly, before other developers are inconvenienced by the failure.
+We are running buildbot at Rackspace to verify docker and docker-registry
+pass tests, and to check code coverage details.
+
+The docker-ci instance is at https://docker-ci.docker.io/waterfall
+
+Inside the docker-ci container we have the following directory structure:
+
+/docker-ci                                          source code of docker-ci
+/data/backup/docker-ci/                             daily backup (replicated over S3)
+/data/docker-ci/coverage/{docker,docker-registry}/  mapped to host volumes
+/data/buildbot/{master,slave}/                      main docker-ci buildbot config and database
+/var/socket/{docker.sock}                           host volume access to docker socket

-Buildbot
-========
+Production deployment
+=====================

-Buildbot is a continuous integration system designed to automate the
-build/test cycle. By automatically rebuilding and testing the tree each time
-something has changed, build problems are pinpointed quickly, before other
-developers are inconvenienced by the failure.
+::

-We are running buildbot in Amazon's EC2 to verify docker passes all
-tests when commits get pushed to the master branch and building
-nightly releases using Docker in Docker awesome implementation made
-by Jerome Petazzoni.
-https://github.com/jpetazzo/dind
+  # Clone docker-ci repository
+  git clone https://github.com/dotcloud/docker
+  cd docker/hack/infrastructure/docker-ci

-Docker's buildbot instance is at http://docker-ci.dotcloud.com/waterfall
+  export DOCKER_PROD=[PRODUCTION_SERVER_IP]

-For deployment instructions, please take a look at
-hack/infrastructure/docker-ci/Dockerfile
+  # Create data host volume. (only once)
+  docker -H $DOCKER_PROD run -v /home:/data ubuntu:12.04 \
+    mkdir -p /data/docker-ci/coverage/docker
+  docker -H $DOCKER_PROD run -v /home:/data ubuntu:12.04 \
+    mkdir -p /data/docker-ci/coverage/docker-registry
+  docker -H $DOCKER_PROD run -v /home:/data ubuntu:12.04 \
+    chown -R 1000.1000 /data/docker-ci

+  # dcr deployment. Define credentials and special environment dcr variables
+  # ( retrieved at /hack/infrastructure/docker-ci/dcr/prod/docker-ci.yml )
+  export WEB_USER=[DOCKER-CI-WEBSITE-USERNAME]
+  export WEB_IRC_PWD=[DOCKER-CI-WEBSITE-PASSWORD]
+  export BUILDBOT_PWD=[BUILDSLAVE_PASSWORD]
+  export AWS_ACCESS_KEY=[DOCKER_RELEASE_S3_ACCESS]
+  export AWS_SECRET_KEY=[DOCKER_RELEASE_S3_SECRET]
+  export GPG_PASSPHRASE=[DOCKER_RELEASE_PASSPHRASE]
+  export BACKUP_AWS_ID=[S3_BUCKET_CREDENTIAL_ACCESS]
+  export BACKUP_AWS_SECRET=[S3_BUCKET_CREDENTIAL_SECRET]
+  export SMTP_USER=[MAILGUN_SMTP_USERNAME]
+  export SMTP_PWD=[MAILGUN_SMTP_PASSWORD]
+  export EMAIL_RCP=[EMAIL_FOR_BUILD_ERRORS]

+  # Build docker-ci and testbuilder docker images
+  docker -H $DOCKER_PROD build -rm -t docker-ci/docker-ci .
+  (cd testbuilder; docker -H $DOCKER_PROD build -rm -t docker-ci/testbuilder .)

+  # Run docker-ci container ( assuming no previous container running )
+  (cd dcr/prod; dcr docker-ci.yml start)
+  (cd dcr/prod; dcr docker-ci.yml register docker-ci.docker.io)
View File

@ -1 +1 @@
-0.4.5
+0.5.6

View File

@ -1 +0,0 @@
-Buildbot configuration and setup files

View File

@ -1,18 +0,0 @@
[program:buildmaster]
command=twistd --nodaemon --no_save -y buildbot.tac
directory=/data/buildbot/master
chown= root:root
redirect_stderr=true
stdout_logfile=/var/log/supervisor/buildbot-master.log
stderr_logfile=/var/log/supervisor/buildbot-master.log
[program:buildworker]
command=twistd --nodaemon --no_save -y buildbot.tac
directory=/data/buildbot/slave
chown= root:root
redirect_stderr=true
stdout_logfile=/var/log/supervisor/buildbot-slave.log
stderr_logfile=/var/log/supervisor/buildbot-slave.log
[group:buildbot]
programs=buildmaster,buildworker

View File

@ -88,7 +88,8 @@ def getChanges(request, options = None):
         payload = json.loads(request.args['payload'][0])
         import urllib,datetime
         fname = str(datetime.datetime.now()).replace(' ','_').replace(':','-')[:19]
-        open('github_{0}.json'.format(fname),'w').write(json.dumps(json.loads(urllib.unquote(request.args['payload'][0])), sort_keys = True, indent = 2))
+        # Github event debug
+        # open('github_{0}.json'.format(fname),'w').write(json.dumps(json.loads(urllib.unquote(request.args['payload'][0])), sort_keys = True, indent = 2))
         if 'pull_request' in payload:
             user = payload['pull_request']['user']['login']

View File

@ -1,4 +1,4 @@
-import os
+import os, re
 from buildbot.buildslave import BuildSlave
 from buildbot.schedulers.forcesched import ForceScheduler
 from buildbot.schedulers.basic import SingleBranchScheduler
@ -6,127 +6,156 @@ from buildbot.schedulers.timed import Nightly
 from buildbot.changes import filter
 from buildbot.config import BuilderConfig
 from buildbot.process.factory import BuildFactory
-from buildbot.process.properties import Interpolate
+from buildbot.process.properties import Property
 from buildbot.steps.shell import ShellCommand
 from buildbot.status import html, words
 from buildbot.status.web import authz, auth
 from buildbot.status.mail import MailNotifier

-PORT_WEB = 80           # Buildbot webserver port
+def ENV(x):
+    '''Promote an environment variable for global use returning its value'''
+    retval = os.environ.get(x, '')
+    globals()[x] = retval
+    return retval
+
+class TestCommand(ShellCommand):
+    '''Extend ShellCommand with optional summary logs'''
+    def __init__(self, *args, **kwargs):
+        super(TestCommand, self).__init__(*args, **kwargs)
+
+    def createSummary(self, log):
+        exit_status = re.sub(r'.+\n\+ exit (\d+).+',
+            r'\1', log.getText()[-100:], flags=re.DOTALL)
+        if exit_status != '0':
+            return
+        # Infer coverage path from log
+        if '+ COVERAGE_PATH' in log.getText():
+            path = re.sub(r'.+\+ COVERAGE_PATH=((.+?)-\d+).+',
+                r'\2/\1', log.getText(), flags=re.DOTALL)
+            url = '{}coverage/{}/index.html'.format(c['buildbotURL'], path)
+            self.addURL('coverage', url)
+        elif 'COVERAGE_FILE' in log.getText():
+            path = re.sub(r'.+\+ COVERAGE_FILE=((.+?)-\d+).+',
+                r'\2/\1', log.getText(), flags=re.DOTALL)
+            url = '{}coverage/{}/index.html'.format(c['buildbotURL'], path)
+            self.addURL('coverage', url)
+
+PORT_WEB = 8000         # Buildbot webserver port
 PORT_GITHUB = 8011      # Buildbot github hook port
 PORT_MASTER = 9989      # Port where buildbot master listen buildworkers
-TEST_USER = 'buildbot'  # Credential to authenticate build triggers
-TEST_PWD = 'docker'     # Credential to authenticate build triggers
-GITHUB_DOCKER = 'github.com/dotcloud/docker'
-BUILDBOT_PATH = '/data/buildbot'
-DOCKER_PATH = '/go/src/github.com/dotcloud/docker'
-DOCKER_CI_PATH = '/docker-ci'
+
+BUILDBOT_URL = '//localhost:{}/'.format(PORT_WEB)
+DOCKER_REPO = 'https://github.com/docker-test/docker'
+DOCKER_TEST_ARGV = 'HEAD {}'.format(DOCKER_REPO)
+REGISTRY_REPO = 'https://github.com/docker-test/docker-registry'
+REGISTRY_TEST_ARGV = 'HEAD {}'.format(REGISTRY_REPO)
+if ENV('DEPLOYMENT') == 'staging':
+    BUILDBOT_URL = "//docker-ci-stage.docker.io/"
+if ENV('DEPLOYMENT') == 'production':
+    BUILDBOT_URL = '//docker-ci.docker.io/'
+    DOCKER_REPO = 'https://github.com/dotcloud/docker'
+    DOCKER_TEST_ARGV = ''
+    REGISTRY_REPO = 'https://github.com/dotcloud/docker-registry'
+    REGISTRY_TEST_ARGV = ''

 # Credentials set by setup.sh from deployment.py
-BUILDBOT_PWD = ''
-IRC_PWD = ''
-IRC_CHANNEL = ''
-SMTP_USER = ''
-SMTP_PWD = ''
-EMAIL_RCP = ''
+ENV('WEB_USER')
+ENV('WEB_IRC_PWD')
+ENV('BUILDBOT_PWD')
+ENV('SMTP_USER')
+ENV('SMTP_PWD')
+ENV('EMAIL_RCP')
+ENV('IRC_CHANNEL')

 c = BuildmasterConfig = {}

-c['title'] = "Docker"
+c['title'] = "docker-ci"
 c['titleURL'] = "waterfall"
-c['buildbotURL'] = "http://docker-ci.dotcloud.com/"
+c['buildbotURL'] = BUILDBOT_URL
 c['db'] = {'db_url':"sqlite:///state.sqlite"}
 c['slaves'] = [BuildSlave('buildworker', BUILDBOT_PWD)]
 c['slavePortnum'] = PORT_MASTER

 # Schedulers
-c['schedulers'] = [ForceScheduler(name='trigger', builderNames=['docker',
-    'index','registry','docker-coverage','registry-coverage','nightlyrelease'])]
-c['schedulers'] += [SingleBranchScheduler(name="all", treeStableTimer=None,
+c['schedulers'] = [ForceScheduler(name='trigger', builderNames=[
+    'docker', 'docker-registry', 'nightlyrelease', 'backup'])]
+c['schedulers'] += [SingleBranchScheduler(name="docker", treeStableTimer=None,
     change_filter=filter.ChangeFilter(branch='master',
-    repository='https://github.com/dotcloud/docker'), builderNames=['docker'])]
-c['schedulers'] += [SingleBranchScheduler(name='pullrequest',
-    change_filter=filter.ChangeFilter(category='github_pullrequest'), treeStableTimer=None,
-    builderNames=['pullrequest'])]
-c['schedulers'] += [Nightly(name='daily', branch=None, builderNames=['nightlyrelease',
-    'docker-coverage','registry-coverage'], hour=7, minute=00)]
-c['schedulers'] += [Nightly(name='every4hrs', branch=None, builderNames=['registry','index'],
-    hour=range(0,24,4), minute=15)]
+    repository=DOCKER_REPO), builderNames=['docker'])]
+c['schedulers'] += [SingleBranchScheduler(name="registry", treeStableTimer=None,
+    change_filter=filter.ChangeFilter(branch='master',
+    repository=REGISTRY_REPO), builderNames=['docker-registry'])]
+c['schedulers'] += [SingleBranchScheduler(name='docker-pr', treeStableTimer=None,
+    change_filter=filter.ChangeFilter(category='github_pullrequest',
+    project='docker'), builderNames=['docker-pr'])]
+c['schedulers'] += [SingleBranchScheduler(name='docker-registry-pr', treeStableTimer=None,
+    change_filter=filter.ChangeFilter(category='github_pullrequest',
+    project='docker-registry'), builderNames=['docker-registry-pr'])]
+c['schedulers'] += [Nightly(name='daily', branch=None, builderNames=[
+    'nightlyrelease', 'backup'], hour=7, minute=00)]

 # Builders

-# Docker commit test
-test_cmd = ('docker run -privileged mzdaniel/test_docker hack/dind'
-    ' test_docker.sh %(src::revision)s')
+# Backup
 factory = BuildFactory()
-factory.addStep(ShellCommand(description='Docker', logEnviron=False,
-    usePTY=True, command=["sh", "-c", Interpolate(test_cmd)]))
-c['builders'] = [BuilderConfig(name='docker',slavenames=['buildworker'],
+factory.addStep(TestCommand(description='backup', logEnviron=False,
+    usePTY=True, command='/docker-ci/tool/backup.py'))
+c['builders'] = [BuilderConfig(name='backup',slavenames=['buildworker'],
+    factory=factory)]
+
+# Docker test
+factory = BuildFactory()
+factory.addStep(TestCommand(description='docker', logEnviron=False,
+    usePTY=True, command='/docker-ci/dockertest/docker {}'.format(DOCKER_TEST_ARGV)))
+c['builders'] += [BuilderConfig(name='docker',slavenames=['buildworker'],
     factory=factory)]

 # Docker pull request test
-test_cmd = ('docker run -privileged mzdaniel/test_docker hack/dind'
-    ' test_docker.sh %(src::revision)s %(src::repository)s %(src::branch)s')
 factory = BuildFactory()
-factory.addStep(ShellCommand(description='pull_request', logEnviron=False,
-    usePTY=True, command=["sh", "-c", Interpolate(test_cmd)]))
-c['builders'] += [BuilderConfig(name='pullrequest',slavenames=['buildworker'],
+factory.addStep(TestCommand(description='docker-pr', logEnviron=False,
+    usePTY=True, command=['/docker-ci/dockertest/docker',
+    Property('revision'), Property('repository'), Property('branch')]))
+c['builders'] += [BuilderConfig(name='docker-pr',slavenames=['buildworker'],
     factory=factory)]

-# Docker coverage test
+# docker-registry test
 factory = BuildFactory()
-factory.addStep(ShellCommand(description='docker-coverage', logEnviron=False,
-    usePTY=True, command='{0}/docker-coverage/coverage-docker.sh'.format(
-    DOCKER_CI_PATH)))
-c['builders'] += [BuilderConfig(name='docker-coverage',slavenames=['buildworker'],
+factory.addStep(TestCommand(description='docker-registry', logEnviron=False,
+    usePTY=True, command='/docker-ci/dockertest/docker-registry {}'.format(REGISTRY_TEST_ARGV)))
+c['builders'] += [BuilderConfig(name='docker-registry',slavenames=['buildworker'],
     factory=factory)]

-# Docker registry coverage test
+# Docker registry pull request test
 factory = BuildFactory()
-factory.addStep(ShellCommand(description='registry-coverage', logEnviron=False,
-    usePTY=True, command='docker run registry_coverage'.format(
-    DOCKER_CI_PATH)))
-c['builders'] += [BuilderConfig(name='registry-coverage',slavenames=['buildworker'],
-    factory=factory)]
-
-# Registry functional test
-factory = BuildFactory()
-factory.addStep(ShellCommand(description='registry', logEnviron=False,
-    command='. {0}/master/credentials.cfg; '
-    '{1}/functionaltests/test_registry.sh'.format(BUILDBOT_PATH, DOCKER_CI_PATH),
-    usePTY=True))
-c['builders'] += [BuilderConfig(name='registry',slavenames=['buildworker'],
-    factory=factory)]
-
-# Index functional test
-factory = BuildFactory()
-factory.addStep(ShellCommand(description='index', logEnviron=False,
-    command='. {0}/master/credentials.cfg; '
-    '{1}/functionaltests/test_index.py'.format(BUILDBOT_PATH, DOCKER_CI_PATH),
-    usePTY=True))
-c['builders'] += [BuilderConfig(name='index',slavenames=['buildworker'],
+factory.addStep(TestCommand(description='docker-registry-pr', logEnviron=False,
+    usePTY=True, command=['/docker-ci/dockertest/docker-registry',
+    Property('revision'), Property('repository'), Property('branch')]))
+c['builders'] += [BuilderConfig(name='docker-registry-pr',slavenames=['buildworker'],
     factory=factory)]

 # Docker nightly release
-nightlyrelease_cmd = ('docker version; docker run -i -t -privileged -e AWS_S3_BUCKET='
-    'test.docker.io dockerbuilder hack/dind dockerbuild.sh')
 factory = BuildFactory()
 factory.addStep(ShellCommand(description='NightlyRelease',logEnviron=False,
-    usePTY=True, command=nightlyrelease_cmd))
+    usePTY=True, command=['/docker-ci/dockertest/nightlyrelease']))
 c['builders'] += [BuilderConfig(name='nightlyrelease',slavenames=['buildworker'],
     factory=factory)]

 # Status
-authz_cfg = authz.Authz(auth=auth.BasicAuth([(TEST_USER, TEST_PWD)]),
+authz_cfg = authz.Authz(auth=auth.BasicAuth([(WEB_USER, WEB_IRC_PWD)]),
     forceBuild='auth')
 c['status'] = [html.WebStatus(http_port=PORT_WEB, authz=authz_cfg)]
 c['status'].append(html.WebStatus(http_port=PORT_GITHUB, allowForce=True,
     change_hook_dialects={ 'github': True }))
-c['status'].append(MailNotifier(fromaddr='buildbot@docker.io',
+c['status'].append(MailNotifier(fromaddr='docker-test@docker.io',
     sendToInterestedUsers=False, extraRecipients=[EMAIL_RCP],
     mode='failing', relayhost='smtp.mailgun.org', smtpPort=587, useTls=True,
     smtpUser=SMTP_USER, smtpPassword=SMTP_PWD))
 c['status'].append(words.IRC("irc.freenode.net", "dockerqabot",
-    channels=[IRC_CHANNEL], password=IRC_PWD, allowForce=True,
+    channels=[IRC_CHANNEL], password=WEB_IRC_PWD, allowForce=True,
     notify_events={'exception':1, 'successToFailure':1, 'failureToSuccess':1}))

View File

@ -1,9 +0,0 @@
sqlalchemy<=0.7.9
sqlalchemy-migrate>=0.7.2
buildbot==0.8.7p1
buildbot_slave==0.8.7p1
nose==1.2.1
requests==1.1.0
flask==0.10.1
simplejson==2.3.2
selenium==2.35.0

View File

@ -1,59 +0,0 @@
#!/usr/bin/env bash
# Setup of buildbot configuration. Package installation is being done by
# Vagrantfile
# Dependencies: buildbot, buildbot-slave, supervisor
USER=$1
CFG_PATH=$2
DOCKER_PATH=$3
BUILDBOT_PWD=$4
IRC_PWD=$5
IRC_CHANNEL=$6
SMTP_USER=$7
SMTP_PWD=$8
EMAIL_RCP=$9
REGISTRY_USER=${10}
REGISTRY_PWD=${11}
REGISTRY_BUCKET=${12}
REGISTRY_ACCESS_KEY=${13}
REGISTRY_SECRET_KEY=${14}
BUILDBOT_PATH="/data/buildbot"
SLAVE_NAME="buildworker"
SLAVE_SOCKET="localhost:9989"
export PATH="/bin:sbin:/usr/bin:/usr/sbin:/usr/local/bin"
function run { su $USER -c "$1"; }
# Exit if buildbot has already been installed
[ -d "$BUILDBOT_PATH" ] && exit 0
# Setup buildbot
run "mkdir -p $BUILDBOT_PATH"
cd $BUILDBOT_PATH
run "buildbot create-master master"
run "cp $CFG_PATH/master.cfg master"
run "sed -i -E 's#(BUILDBOT_PWD = ).+#\1\"$BUILDBOT_PWD\"#' master/master.cfg"
run "sed -i -E 's#(IRC_PWD = ).+#\1\"$IRC_PWD\"#' master/master.cfg"
run "sed -i -E 's#(IRC_CHANNEL = ).+#\1\"$IRC_CHANNEL\"#' master/master.cfg"
run "sed -i -E 's#(SMTP_USER = ).+#\1\"$SMTP_USER\"#' master/master.cfg"
run "sed -i -E 's#(SMTP_PWD = ).+#\1\"$SMTP_PWD\"#' master/master.cfg"
run "sed -i -E 's#(EMAIL_RCP = ).+#\1\"$EMAIL_RCP\"#' master/master.cfg"
run "buildslave create-slave slave $SLAVE_SOCKET $SLAVE_NAME $BUILDBOT_PWD"
run "echo 'export DOCKER_CREDS=\"$REGISTRY_USER:$REGISTRY_PWD\"' > $BUILDBOT_PATH/master/credentials.cfg"
run "echo 'export S3_BUCKET=\"$REGISTRY_BUCKET\"' >> $BUILDBOT_PATH/master/credentials.cfg"
run "echo 'export S3_ACCESS_KEY=\"$REGISTRY_ACCESS_KEY\"' >> $BUILDBOT_PATH/master/credentials.cfg"
run "echo 'export S3_SECRET_KEY=\"$REGISTRY_SECRET_KEY\"' >> $BUILDBOT_PATH/master/credentials.cfg"
# Patch github webstatus to capture pull requests
cp $CFG_PATH/github.py /usr/local/lib/python2.7/dist-packages/buildbot/status/web/hooks
# Allow buildbot subprocesses (docker tests) to properly run in containers,
# in particular with docker -u
run "sed -i 's/^umask = None/umask = 000/' slave/buildbot.tac"
# Setup supervisor
cp $CFG_PATH/buildbot.conf /etc/supervisor/conf.d/buildbot.conf
sed -i -E "s/^chmod=0700.+/chmod=0770\nchown=root:$USER/" /etc/supervisor/supervisord.conf
kill -HUP $(pgrep -f "/usr/bin/python /usr/bin/supervisord")

View File

@ -0,0 +1,22 @@
docker-ci:
image: "docker-ci/docker-ci"
release_name: "docker-ci-0.5.6"
ports: ["80","2222:22","8011:8011"]
register: "80"
volumes: ["/run:/var/socket","/home/docker-ci:/data/docker-ci"]
command: []
env:
- "DEPLOYMENT=production"
- "IRC_CHANNEL=docker-testing"
- "BACKUP_BUCKET=backup-ci"
- "$WEB_USER"
- "$WEB_IRC_PWD"
- "$BUILDBOT_PWD"
- "$AWS_ACCESS_KEY"
- "$AWS_SECRET_KEY"
- "$GPG_PASSPHRASE"
- "$BACKUP_AWS_ID"
- "$BACKUP_AWS_SECRET"
- "$SMTP_USER"
- "$SMTP_PWD"
- "$EMAIL_RCP"

View File

@ -0,0 +1,5 @@
default:
hipaches: ['192.168.100.67:6379']
daemons: ['192.168.100.67:4243']
use_ssh: False

View File

@ -0,0 +1,22 @@
docker-ci:
image: "docker-ci/docker-ci"
release_name: "docker-ci-stage"
ports: ["80","2222:22","8011:8011"]
register: "80"
volumes: ["/run:/var/socket","/home/docker-ci:/data/docker-ci"]
command: []
env:
- "DEPLOYMENT=staging"
- "IRC_CHANNEL=docker-testing-staging"
- "BACKUP_BUCKET=ci-backup-stage"
- "$BACKUP_AWS_ID"
- "$BACKUP_AWS_SECRET"
- "$WEB_USER"
- "$WEB_IRC_PWD"
- "$BUILDBOT_PWD"
- "$AWS_ACCESS_KEY"
- "$AWS_SECRET_KEY"
- "$GPG_PASSPHRASE"
- "$SMTP_USER"
- "$SMTP_PWD"
- "$EMAIL_RCP"

View File

@ -0,0 +1,5 @@
default:
hipaches: ['192.168.100.65:6379']
daemons: ['192.168.100.65:4243']
use_ssh: False

View File

@ -1,171 +0,0 @@
#!/usr/bin/env python
import os, sys, re, json, requests, base64
from subprocess import call
from fabric import api
from fabric.api import cd, run, put, sudo
from os import environ as env
from datetime import datetime
from time import sleep
# Remove SSH private key as it needs more processing
CONFIG = json.loads(re.sub(r'("DOCKER_CI_KEY".+?"(.+?)",)','',
env['CONFIG_JSON'], flags=re.DOTALL))
# Populate environment variables
for key in CONFIG:
env[key] = CONFIG[key]
# Load SSH private key
env['DOCKER_CI_KEY'] = re.sub('^.+"DOCKER_CI_KEY".+?"(.+?)".+','\\1',
env['CONFIG_JSON'],flags=re.DOTALL)
DROPLET_NAME = env.get('DROPLET_NAME','docker-ci')
TIMEOUT = 120 # Seconds before timeout droplet creation
IMAGE_ID = 1004145 # Docker on Ubuntu 13.04
REGION_ID = 4 # New York 2
SIZE_ID = 62 # memory 2GB
DO_IMAGE_USER = 'root' # Image user on Digital Ocean
API_URL = 'https://api.digitalocean.com/'
DOCKER_PATH = '/go/src/github.com/dotcloud/docker'
DOCKER_CI_PATH = '/docker-ci'
CFG_PATH = '{}/buildbot'.format(DOCKER_CI_PATH)
class DigitalOcean():
def __init__(self, key, client):
'''Set default API parameters'''
self.key = key
self.client = client
self.api_url = API_URL
def api(self, cmd_path, api_arg={}):
'''Make api call'''
api_arg.update({'api_key':self.key, 'client_id':self.client})
resp = requests.get(self.api_url + cmd_path, params=api_arg).text
resp = json.loads(resp)
if resp['status'] != 'OK':
raise Exception(resp['error_message'])
return resp
def droplet_data(self, name):
'''Get droplet data'''
data = self.api('droplets')
data = [droplet for droplet in data['droplets']
if droplet['name'] == name]
return data[0] if data else {}
def json_fmt(data):
'''Format json output'''
return json.dumps(data, sort_keys = True, indent = 2)
do = DigitalOcean(env['DO_API_KEY'], env['DO_CLIENT_ID'])
# Get DROPLET_NAME data
data = do.droplet_data(DROPLET_NAME)
# Stop processing if DROPLET_NAME exists on Digital Ocean
if data:
print ('Droplet: {} already deployed. Not further processing.'
.format(DROPLET_NAME))
exit(1)
# Create droplet
do.api('droplets/new', {'name':DROPLET_NAME, 'region_id':REGION_ID,
'image_id':IMAGE_ID, 'size_id':SIZE_ID,
'ssh_key_ids':[env['DOCKER_KEY_ID']]})
# Wait for droplet to be created.
start_time = datetime.now()
while (data.get('status','') != 'active' and (
datetime.now()-start_time).seconds < TIMEOUT):
data = do.droplet_data(DROPLET_NAME)
print data['status']
sleep(3)
# Wait for the machine to boot
sleep(15)
# Get droplet IP
ip = str(data['ip_address'])
print 'droplet: {} ip: {}'.format(DROPLET_NAME, ip)
# Create docker-ci ssh private key so docker-ci docker container can communicate
# with its EC2 instance
os.makedirs('/root/.ssh')
open('/root/.ssh/id_rsa','w').write(env['DOCKER_CI_KEY'])
os.chmod('/root/.ssh/id_rsa',0600)
open('/root/.ssh/config','w').write('StrictHostKeyChecking no\n')
api.env.host_string = ip
api.env.user = DO_IMAGE_USER
api.env.key_filename = '/root/.ssh/id_rsa'
# Correct timezone
sudo('echo "America/Los_Angeles" >/etc/timezone')
sudo('dpkg-reconfigure --frontend noninteractive tzdata')
# Load public docker-ci key
sudo("echo '{}' >> /root/.ssh/authorized_keys".format(env['DOCKER_CI_PUB']))
# Create docker nightly release credentials file
credentials = {
'AWS_ACCESS_KEY': env['PKG_ACCESS_KEY'],
'AWS_SECRET_KEY': env['PKG_SECRET_KEY'],
'GPG_PASSPHRASE': env['PKG_GPG_PASSPHRASE']}
open(DOCKER_CI_PATH + '/nightlyrelease/release_credentials.json', 'w').write(
base64.b64encode(json.dumps(credentials)))
# Transfer docker
sudo('mkdir -p ' + DOCKER_CI_PATH)
sudo('chown {}.{} {}'.format(DO_IMAGE_USER, DO_IMAGE_USER, DOCKER_CI_PATH))
call('/usr/bin/rsync -aH {} {}@{}:{}'.format(DOCKER_CI_PATH, DO_IMAGE_USER, ip,
os.path.dirname(DOCKER_CI_PATH)), shell=True)
# Install Docker and Buildbot dependencies
sudo('mkdir /mnt/docker; ln -s /mnt/docker /var/lib/docker')
sudo('apt-key adv --keyserver keyserver.ubuntu.com --recv-keys 36A1D7869245C8950F966E92D8576A8BA88D21E9')
sudo('echo deb https://get.docker.io/ubuntu docker main >'
' /etc/apt/sources.list.d/docker.list')
sudo('echo -e "deb http://archive.ubuntu.com/ubuntu raring main universe\n'
'deb http://us.archive.ubuntu.com/ubuntu/ raring-security main universe\n"'
' > /etc/apt/sources.list; apt-get update')
sudo('DEBIAN_FRONTEND=noninteractive apt-get install -q -y wget python-dev'
' python-pip supervisor git mercurial linux-image-extra-$(uname -r)'
' aufs-tools make libfontconfig libevent-dev libsqlite3-dev libssl-dev')
sudo('wget -O - https://go.googlecode.com/files/go1.2.linux-amd64.tar.gz | '
'tar -v -C /usr/local -xz; ln -s /usr/local/go/bin/go /usr/bin/go')
sudo('GOPATH=/go go get -d github.com/dotcloud/docker')
sudo('pip install -r {}/requirements.txt'.format(CFG_PATH))
# Install docker and testing dependencies
sudo('apt-get install -y -q lxc-docker')
sudo('curl -s https://phantomjs.googlecode.com/files/'
'phantomjs-1.9.1-linux-x86_64.tar.bz2 | tar jx -C /usr/bin'
' --strip-components=2 phantomjs-1.9.1-linux-x86_64/bin/phantomjs')
# Build docker-ci containers
sudo('cd {}; docker build -t docker .'.format(DOCKER_PATH))
sudo('cd {}; docker build -t docker-ci .'.format(DOCKER_CI_PATH))
sudo('cd {}/nightlyrelease; docker build -t dockerbuilder .'.format(
DOCKER_CI_PATH))
sudo('cd {}/registry-coverage; docker build -t registry_coverage .'.format(
DOCKER_CI_PATH))
# Download docker-ci testing container
sudo('docker pull mzdaniel/test_docker')
# Setup buildbot
sudo('mkdir /data')
sudo('{0}/setup.sh root {0} {1} {2} {3} {4} {5} {6} {7} {8} {9} {10}'
' {11} {12}'.format(CFG_PATH, DOCKER_PATH, env['BUILDBOT_PWD'],
env['IRC_PWD'], env['IRC_CHANNEL'], env['SMTP_USER'],
env['SMTP_PWD'], env['EMAIL_RCP'], env['REGISTRY_USER'],
env['REGISTRY_PWD'], env['REGISTRY_BUCKET'], env['REGISTRY_ACCESS_KEY'],
env['REGISTRY_SECRET_KEY']))
# Preventively reboot docker-ci daily
sudo('ln -s /sbin/reboot /etc/cron.daily')

View File

@ -1,32 +0,0 @@
#!/usr/bin/env bash
set -x
# Generate a random string of $1 characters
function random {
cat /dev/urandom | tr -cd 'a-f0-9' | head -c $1
}
# Compute test paths
BASE_PATH=`pwd`/test_docker_$(random 12)
DOCKER_PATH=$BASE_PATH/go/src/github.com/dotcloud/docker
export GOPATH=$BASE_PATH/go:$DOCKER_PATH/vendor
# Fetch latest master
mkdir -p $DOCKER_PATH
cd $DOCKER_PATH
git init .
git fetch -q http://github.com/dotcloud/docker master
git reset --hard FETCH_HEAD
# Fetch go coverage
cd $BASE_PATH/go
GOPATH=$BASE_PATH/go go get github.com/axw/gocov/gocov
sudo -E GOPATH=$GOPATH ./bin/gocov test -deps -exclude-goroot -v\
-exclude github.com/gorilla/context,github.com/gorilla/mux,github.com/kr/pty,\
code.google.com/p/go.net/websocket\
github.com/dotcloud/docker | ./bin/gocov report; exit_status=$?
# Cleanup testing directory
rm -rf $BASE_PATH
exit $exit_status

Some files were not shown because too many files have changed in this diff.