This patch creates interfaces in builder/ for building Docker images.
It is a first step in a series of patches to remove the daemon
dependency on builder and later allow a client-side Dockerfile builder
as well as potential builder plugins.
It is needed because we cannot remove the /build API endpoint, so we
need to keep the server-side Dockerfile builder, but we also want to
reuse the same Dockerfile parser and evaluator for both server-side and
client-side.
builder/dockerfile/ and api/server/builder.go contain implementations
of those interfaces as a refactoring of the current code.
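For orientation, here is a minimal sketch of what such a split can look like; the interface and method names below are hypothetical and are not the actual interfaces introduced by this patch:
```go
// Purely illustrative: hypothetical names, not the real builder/ interfaces.
package builder

import "io"

// Backend is the narrow slice of daemon functionality a builder needs, so the
// Dockerfile frontend no longer depends on the daemon package directly.
type Backend interface {
	// PullImage fetches an image so it can be used as a base image.
	PullImage(name string) error
}

// Builder turns a build context (e.g. a tar stream containing a Dockerfile)
// into an image and reports the resulting image ID.
type Builder interface {
	Build(buildContext io.Reader, backend Backend) (imageID string, err error)
}
```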
Signed-off-by: Tibor Vass <tibor@docker.com>
It prevents containers from occupying those resources (ports, unix sockets).
Also fixed a false-positive test for that case.
Fixes #15912
Signed-off-by: Alexander Morozov <lk4d4@docker.com>
Search terms shouldn't be restricted to only full valid repository
names. It should be perfectly valid to search using a part of a name,
even if it ends with a period, dash or underscore.
Signed-off-by: Hu Keping <hukeping@huawei.com>
Exec start was sending HTTP 500 for every error.
Fixed an error where pausing a container and then calling exec start
caused the daemon to freeze.
Updated the API docs, which incorrectly showed that a successful exec start
returns HTTP 201; in reality it is HTTP 200.
Signed-off-by: Brian Goff <cpuguy83@gmail.com>
Although having a request ID available throughout the codebase is very
valuable, the impact of requiring a Context as an argument to every
function in the code path of an API request is too significant and was
not properly understood at the time of the review.
Furthermore, mixing API-layer code with non-API-layer code makes the
latter usable only by API-layer code (code that has a notion of Context).
This reverts commit de41640435, reversing
changes made to 7daeecd42d.
Signed-off-by: Tibor Vass <tibor@docker.com>
Conflicts:
api/server/container.go
builder/internals.go
daemon/container_unix.go
daemon/create.go
This test fails once in a while on CI because the docker attach
command might be called after the container has exited.
Signed-off-by: Vincent Demeester <vincent@sbr.pm>
Fixes an issue where `VOLUME some_name:/foo` would be parsed as a named
volume, allowing access from the builder to any volume on the host.
This makes sure that named volumes must always be passed in as a bind.
Signed-off-by: Brian Goff <cpuguy83@gmail.com>
Before this patch, libcontainer errored out with `invalid
argument` or `numerical result out of range` while trying to write
an invalid value to cpuset.cpus or cpuset.mems.
This patch adds validation to the --cpuset-cpus and --cpuset-mems flags, along with
validation against the system's available cpus/mems, before starting a container.
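As an illustration of the kind of check this enables, here is a simplified sketch (not the actual daemon code) that parses a cpuset spec such as `0-2,7` and verifies it against the set the system actually exposes (which, on a real host, comes from the cpuset cgroup files):
```go
package main

import (
	"fmt"
	"strconv"
	"strings"
)

// parseCpuset expands a spec like "0-2,7" into a set of ints.
func parseCpuset(spec string) (map[int]bool, error) {
	set := map[int]bool{}
	if spec == "" {
		return set, nil
	}
	for _, part := range strings.Split(spec, ",") {
		if lo, hi, ok := strings.Cut(part, "-"); ok {
			start, err1 := strconv.Atoi(lo)
			end, err2 := strconv.Atoi(hi)
			if err1 != nil || err2 != nil || start > end {
				return nil, fmt.Errorf("invalid cpuset value %q", spec)
			}
			for i := start; i <= end; i++ {
				set[i] = true
			}
		} else {
			n, err := strconv.Atoi(part)
			if err != nil {
				return nil, fmt.Errorf("invalid cpuset value %q", spec)
			}
			set[n] = true
		}
	}
	return set, nil
}

// isCpusetAvailable reports whether every requested CPU/mem is available.
func isCpusetAvailable(available, requested map[int]bool) bool {
	for n := range requested {
		if !available[n] {
			return false
		}
	}
	return true
}

func main() {
	avail, _ := parseCpuset("0-3")   // e.g. the host exposes CPUs 0-3
	req, err := parseCpuset("0-2,7") // value given via --cpuset-cpus
	fmt.Println(err, isCpusetAvailable(avail, req)) // <nil> false: CPU 7 is not available
}
```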
Signed-off-by: Antonio Murdaca <runcom@linux.com>
Add a daemon flag to control this behaviour. Add a warning message when pulling
an image from a v1 registry. The default order of pull is slightly altered
with this changeset.
Previously it was:
https v2, https v1, http v2, http v1
now it is:
https v2, http v2, https v1, http v1
Prevent login to v1 registries by explicitly setting the version before the
ping so that there is no fallback to v1.
Add unit tests for v2 only mode. Create a mock server that can register
handlers for various endpoints. Assert no v1 endpoints are hit with legacy
registries disabled for the following commands: pull, push, build, run and
login. Assert the opposite when legacy registries are not disabled.
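A sketch of the new try-order (the endpoint struct and function below are illustrative stand-ins, not the real registry package types):
```go
package main

import "fmt"

// apiEndpoint is an illustrative stand-in for a registry endpoint candidate.
type apiEndpoint struct {
	Scheme  string // "https" or "http"
	Version int    // 2 or 1
}

// lookupOrder returns candidates in the new order described above:
// https v2, http v2, https v1, http v1. With legacy registries disabled,
// the v1 entries are simply dropped.
func lookupOrder(allowV1 bool) []apiEndpoint {
	order := []apiEndpoint{{"https", 2}, {"http", 2}}
	if allowV1 {
		order = append(order, apiEndpoint{"https", 1}, apiEndpoint{"http", 1})
	}
	return order
}

func main() {
	fmt.Println(lookupOrder(true))  // [{https 2} {http 2} {https 1} {http 1}]
	fmt.Println(lookupOrder(false)) // [{https 2} {http 2}]
}
```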
Signed-off-by: Richard Scothern <richard.scothern@gmail.com>
Use `pkg/discovery` to provide nodes discovery between daemon instances.
The functionality is driven by two different command-line flags: the
experimental `--cluster-store` (previously `--kv-store`) and
`--cluster-advertise`. It can be used in two ways by interested
components:
1. Externally by calling the `/info` API and examining the cluster store
field. The `pkg/discovery` package can then be used to hit the same
endpoint and watch for appearing or disappearing nodes. That is the
method that will for example be used by Swarm.
2. Internally by using the `Daemon.discoveryWatcher` instance. That is
the method that will for example be used by libnetwork.
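For example, an external consumer can read the cluster store settings from `/info` roughly like this; the unix-socket path and the JSON field names below are assumptions made for the sake of the sketch:
```go
package main

import (
	"context"
	"encoding/json"
	"fmt"
	"log"
	"net"
	"net/http"
)

func main() {
	// Talk to the local daemon over its unix socket (path assumed; adjust to your setup).
	tr := &http.Transport{
		DialContext: func(ctx context.Context, _, _ string) (net.Conn, error) {
			return net.Dial("unix", "/var/run/docker.sock")
		},
	}
	client := &http.Client{Transport: tr}

	resp, err := client.Get("http://unix/info")
	if err != nil {
		log.Fatal(err)
	}
	defer resp.Body.Close()

	// Field names are assumptions based on this description, not a guaranteed contract.
	var info struct {
		ClusterStore     string
		ClusterAdvertise string
	}
	if err := json.NewDecoder(resp.Body).Decode(&info); err != nil {
		log.Fatal(err)
	}
	fmt.Println("cluster store:    ", info.ClusterStore)
	fmt.Println("cluster advertise:", info.ClusterAdvertise)
}
```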
Signed-off-by: Arnaud Porterie <arnaud.porterie@docker.com>
This PR adds a "request ID" to each event generated, the 'docker events'
stream now looks like this:
```
2015-09-10T15:02:50.000000000-07:00 [reqid: c01e3534ddca] de7c5d4ca927253cf4e978ee9c4545161e406e9b5a14617efb52c658b249174a: (from ubuntu) create
```
Note the `[reqid: c01e3534ddca]` part; that's new.
Each HTTP request will generate its own unique ID. So, if you do a
`docker build` you'll see a series of events all with the same reqID.
This allows log processing tools to determine which events are related
to the same HTTP request.
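A rough sketch of the mechanism (names are illustrative, not the actual code in this PR): an HTTP middleware generates the ID and stores it in a context, and the logging code reads it back out when formatting the `[reqid: ...]` prefix.
```go
package main

import (
	"context"
	"crypto/rand"
	"encoding/hex"
	"fmt"
	"net/http"
)

type ctxKey int

const reqIDKey ctxKey = 0

// withRequestID generates a short unique ID per HTTP request and stores it in
// the request's context so code further down the call path can log it.
func withRequestID(next http.Handler) http.Handler {
	return http.HandlerFunc(func(w http.ResponseWriter, r *http.Request) {
		buf := make([]byte, 6)
		if _, err := rand.Read(buf); err == nil {
			id := hex.EncodeToString(buf) // e.g. "c01e3534ddca"
			r = r.WithContext(context.WithValue(r.Context(), reqIDKey, id))
		}
		next.ServeHTTP(w, r)
	})
}

// requestID pulls the ID back out, e.g. when emitting an event line.
func requestID(ctx context.Context) string {
	id, _ := ctx.Value(reqIDKey).(string)
	return id
}

func main() {
	h := withRequestID(http.HandlerFunc(func(w http.ResponseWriter, r *http.Request) {
		fmt.Fprintf(w, "reqid: %s\n", requestID(r.Context()))
	}))
	http.ListenAndServe(":8080", h)
}
```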
I didn't propagate the context to all possible funcs in the daemon;
I decided to just do the ones that needed it in order to get the reqID
into the events. I'd like to have people review this direction first, and
if we're ok with it then I'll make sure we're consistent about when
we pass around the context - IOW, make sure that all funcs at the same level
have a context passed in even if they don't call the log funcs - this will
ensure we're consistent w/o passing it around for all calls unnecessarily.
ping @icecrime @calavera @crosbymichael
Signed-off-by: Doug Davis <dug@us.ibm.com>
- Print the mount table as in /proc/self/mountinfo
- Do not exit prematurely when one of the ipc mounts doesn't exist.
- Do not exit prematurely when one of the ipc mounts cannot be unmounted.
- Add a unit test to see if the cleanup really works.
- Use syscall.MNT_DETACH to cleanup mounts after a crash.
- Unmount IPC mounts when the daemon unregisters an old running container.
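The lazy unmount relies on MNT_DETACH; a minimal sketch of that call (the helper name and example path are illustrative):
```go
package main

import (
	"fmt"
	"syscall"
)

// unmountIpcMount lazily unmounts one IPC mount. MNT_DETACH detaches the mount
// point immediately and defers the real cleanup until it is no longer busy,
// which is what lets stale mounts left behind by a crash be removed.
func unmountIpcMount(path string) error {
	err := syscall.Unmount(path, syscall.MNT_DETACH)
	if err != nil && err != syscall.EINVAL {
		// EINVAL usually means the path is not a mount point, which is not a
		// failure here: the mount may simply never have existed.
		return fmt.Errorf("failed to umount %s: %v", path, err)
	}
	return nil
}

func main() {
	// Example path only; real shm mounts live under the daemon's container directories.
	fmt.Println(unmountIpcMount("/tmp/does-not-exist"))
}
```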
Signed-off-by: David Calavera <david.calavera@gmail.com>
If an invalid logger address is provided on daemon start, it silently fails.
As the syslog driver already does, this check should be performed on daemon
start for the other drivers as well, and an invalid address should prevent the
daemon from starting.
This patch also adds integration tests for this behavior.
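A simplified sketch of the kind of up-front check meant here, assuming the address is a URL-style value such as `tcp://1.2.3.4:1234` (the accepted schemes and the function name are illustrative, not the actual driver code):
```go
package main

import (
	"fmt"
	"net/url"
)

// validateLogAddress rejects obviously malformed logger addresses up front,
// instead of letting the driver fail silently later.
func validateLogAddress(addr string) error {
	u, err := url.Parse(addr)
	if err != nil {
		return fmt.Errorf("invalid logging address %q: %v", addr, err)
	}
	switch u.Scheme {
	case "tcp", "udp", "unix":
		return nil
	default:
		return fmt.Errorf("unsupported scheme %q in logging address %q", u.Scheme, addr)
	}
}

func main() {
	fmt.Println(validateLogAddress("tcp://1.2.3.4:1234")) // <nil>
	fmt.Println(validateLogAddress("bogus-address"))      // unsupported scheme "" ...
}
```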
Signed-off-by: Antonio Murdaca <runcom@linux.com>
If you don't have cgroup memory swap support, the `dockerCmd` output in
these tests will be polluted by a warning from the daemon, causing the
tests to fail.
There is no need for memory swap support for these tests to run, as it will
be reset to -1 and everything will continue correctly.
Signed-off-by: Antonio Murdaca <runcom@linux.com>
`docker rename foo ''` would result in:
```
usage: docker rename OLD_NAME NEW_NAME
```
which is the old engine's way of returning errors - yes, that's in the
daemon code. So I fixed that error msg to just be normal.
While doing that I noticed that using an empty string for the
source container name failed but didn't print any error message at all.
This is because we would generate a URL like: ../containers//rename/..
which would cause a 301 redirect to ../containers/rename/..;
however, the CLI code doesn't actually deal with 301s - it just ignores
them and returns control back to the caller.
Rather than changing the CLI to deal with 3xx error codes, which would
probably be a good thing to do in a follow-on PR, for this immediate
issue I just added a cli-side check for empty strings for both old and
new names. This way we catch it even before we hit the daemon.
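The client-side guard amounts to something like this (function name and error wording are illustrative, not the exact CLI code):
```go
package main

import (
	"fmt"
	"strings"
)

// validateRenameArgs is the kind of check added on the CLI side: reject empty
// old or new names before any request is sent to the daemon.
func validateRenameArgs(oldName, newName string) error {
	if strings.TrimSpace(oldName) == "" || strings.TrimSpace(newName) == "" {
		return fmt.Errorf("Error: Neither old nor new names may be empty")
	}
	return nil
}

func main() {
	fmt.Println(validateRenameArgs("foo", "")) // Error: Neither old nor new names may be empty
}
```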
API callers will get a 404, assuming they follow the 301, for the
case of the src being empty, and the new error msg when the destination
is empty - so we should be good now.
Add tests for both cases too.
Signed-off-by: Doug Davis <dug@us.ibm.com>
Fixes #11957, fixes #12319
Also removes the check for Darwin when the stdin reader is closed, as it
doesn't appear to block anymore.
Signed-off-by: Brian Goff <cpuguy83@gmail.com>
The GET /containers/json route used to reply with an empty array `[]` when no
containers were available. The daemon containers-list refactor introduced
this bug by declaring the slice without initializing it, so the route was
now replying with `null`.
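The distinction is easy to reproduce: in Go, a declared-but-nil slice marshals to `null`, while an initialized empty slice marshals to `[]`.
```go
package main

import (
	"encoding/json"
	"fmt"
)

func main() {
	var nilSlice []int    // declared, still nil
	emptySlice := []int{} // initialized, length zero

	a, _ := json.Marshal(nilSlice)
	b, _ := json.Marshal(emptySlice)
	fmt.Println(string(a)) // null
	fmt.Println(string(b)) // []
}
```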
Signed-off-by: Antonio Murdaca <runcom@linux.com>
- refactor to make it easier to split the api in the future
- add a check to the existing test case to make sure it contains
some expected output
Signed-off-by: Morgan Bauer <mbauer@us.ibm.com>
- The build-time variables are passed as environment context for command(s)
run as part of the RUN primitive. These variables are not persisted in the environment of
intermediate and final images when passed as context for RUN. The build environment
is prepended to the intermediate container's command string to aid cache lookups.
It also helps with build traceability. But this also makes the feature less secure from
the point of view of passing build-time secrets.
- The build-time variables also get used to expand the symbols used in certain
Dockerfile primitives like ADD, COPY, USER etc., without an explicit prior definition using an
ENV primitive. These variables get persisted in the intermediate and final images
whenever they are expanded.
- The build-time variables are only expanded or passed to the RUN primitive if they
are defined in the Dockerfile using the ARG primitive or belong to the list of built-in variables.
HTTP_PROXY, HTTPS_PROXY, http_proxy, https_proxy, FTP_PROXY and NO_PROXY are built-in
variables that needn't be explicitly defined in the Dockerfile to use this feature.
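The gating rule from the last point can be summarized in a few lines of Go; the helper below is an illustration of the rule, not the builder's actual code:
```go
package main

import "fmt"

// builtinBuildArgs are the variables usable without a prior ARG declaration.
var builtinBuildArgs = map[string]bool{
	"HTTP_PROXY": true, "http_proxy": true,
	"HTTPS_PROXY": true, "https_proxy": true,
	"FTP_PROXY": true, "NO_PROXY": true,
}

// isBuildArgAllowed reports whether a build-time variable may be expanded or
// passed to RUN: it must be declared with ARG in the Dockerfile or be built in.
func isBuildArgAllowed(name string, declaredArgs map[string]bool) bool {
	return declaredArgs[name] || builtinBuildArgs[name]
}

func main() {
	declared := map[string]bool{"APP_VERSION": true}       // from an ARG line
	fmt.Println(isBuildArgAllowed("APP_VERSION", declared)) // true
	fmt.Println(isBuildArgAllowed("HTTP_PROXY", declared))  // true (built-in)
	fmt.Println(isBuildArgAllowed("SECRET", declared))      // false
}
```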
Signed-off-by: Madhav Puri <madhav.puri@gmail.com>