Remove all unneeded disk operations (reloading the TagStore, unmarshaling the image)
for checking whether an image still points to the same ID. The slowest part is now
the queries to SQLite, which will hopefully be removed soon.
Signed-off-by: Alexander Morozov <lk4d4@docker.com>
Tags and digests are kept in the same storage. We want to make sure that they are completely separated: tags are something users set, while digests can only be set by the pull-by-digest code path.
Reverts #14664
Signed-off-by: Tonis Tiigi <tonistiigi@gmail.com>
Add a unit test for validateManifest which ensures extra data can't be
injected by adding data to the JSON object outside the payload area.
This also removes validation of legacy signatures at pull time. This
starts the path of deprecating legacy signatures, whose presence in the
very JSON document they attempt to sign is problematic. These
signatures were only checked for official images, and since they only
caused a weakly-worded message to be printed, removing the verification
should have no impact.
Signed-off-by: Aaron Lehmann <aaron.lehmann@docker.com>
Generate a hash chain involving the image configuration, layer digests,
and parent image hashes. Use the digests to compute IDs for each image
in a manifest, instead of using the remotely specified IDs.
To avoid breaking users' caches, check for images already in the graph
under old IDs, and avoid repulling an image if the version on disk under
the legacy ID ends up with the same digest that was computed from the
manifest for that image.
When a calculated ID already exists in the graph but can't be verified,
continue trying SHA256(digest) until a suitable ID is found.
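For illustration only, here is a minimal Go sketch of the kind of hash chain and collision handling described above; the function names and field layout are made up for this example and do not mirror the actual pull code:
```go
package main

import (
	"crypto/sha256"
	"fmt"
)

// computeImageID derives a content-addressed ID from the image's
// configuration, its layer digest, and the parent image's ID, so the ID
// no longer depends on remotely specified values. The inputs here are
// illustrative, not Docker's actual schema.
func computeImageID(imageConfig []byte, layerDigest, parentID string) string {
	h := sha256.New()
	h.Write([]byte(parentID))
	h.Write([]byte(layerDigest))
	h.Write(imageConfig)
	return fmt.Sprintf("%x", h.Sum(nil))
}

// resolveCollision keeps applying SHA256 to the candidate ID until it no
// longer clashes with an unverifiable entry already in the graph.
func resolveCollision(id string, existsUnverified func(string) bool) string {
	for existsUnverified(id) {
		sum := sha256.Sum256([]byte(id))
		id = fmt.Sprintf("%x", sum[:])
	}
	return id
}

func main() {
	id := computeImageID([]byte(`{"Cmd":["/bin/sh"]}`), "sha256:deadbeef", "")
	fmt.Println(resolveCollision(id, func(string) bool { return false }))
}
```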
"save" and "load" are not changed to use a similar scheme. "load" will
preserve the IDs present in the tar file.
Signed-off-by: Aaron Lehmann <aaron.lehmann@docker.com>
Adds support for the daemon to handle user namespace maps as a
per-daemon setting.
Support for handling uid/gid mapping is added to the builder, the
archive/unarchive packages and functions, and all graphdrivers (except
Windows), and the test suite is updated to handle the user namespace
daemon root/graph changes.
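As a rough illustration of the uid/gid translation the archive code has to perform for a remapped root, here is a hedged Go sketch; the IDMap type and toHost helper are simplified stand-ins, not the real idtools API:
```go
package main

import "fmt"

// IDMap describes one contiguous range of a user-namespace mapping,
// in the same spirit as entries in /proc/<pid>/uid_map.
type IDMap struct {
	ContainerID int
	HostID      int
	Size        int
}

// toHost translates a uid/gid inside the container to its host value,
// which is what archive extraction has to do when chowning files for a
// user-namespaced daemon root.
func toHost(containerID int, maps []IDMap) (int, error) {
	for _, m := range maps {
		if containerID >= m.ContainerID && containerID < m.ContainerID+m.Size {
			return m.HostID + (containerID - m.ContainerID), nil
		}
	}
	return -1, fmt.Errorf("container ID %d not found in mapping", containerID)
}

func main() {
	maps := []IDMap{{ContainerID: 0, HostID: 100000, Size: 65536}}
	hostUID, _ := toHost(0, maps) // root inside the container
	fmt.Println(hostUID)          // 100000 on the host
}
```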
Docker-DCO-1.1-Signed-off-by: Phil Estes <estesp@linux.vnet.ibm.com> (github: estesp)
progressreader.Broadcaster becomes broadcaster.Buffered and
broadcastwriter.Writer becomes broadcaster.Unbuffered.
The package broadcastwriter is thus renamed to broadcaster.
Signed-off-by: Tibor Vass <tibor@docker.com>
Although having a request ID available throughout the codebase is very
valuable, the impact of requiring a Context as an argument to every
function in the code path of an API request is too significant and was
not properly understood at the time of the review.
Furthermore, mixing API-layer code with non-API-layer code makes the
latter usable only by API-layer code (code that has a notion of a Context).
This reverts commit de41640435, reversing
changes made to 7daeecd42d.
Signed-off-by: Tibor Vass <tibor@docker.com>
Conflicts:
api/server/container.go
builder/internals.go
daemon/container_unix.go
daemon/create.go
Add a daemon flag to control this behaviour. Add a warning message when pulling
an image from a v1 registry. The default order of pull is slightly altered
with this changeset, as illustrated below.
Previously it was:
https v2, https v1, http v2, http v1
now it is:
https v2, http v2, https v1, http v1
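For clarity, a hedged Go sketch of what that ordering amounts to; the endpoint type and orderEndpoints helper are invented for this example and are not the registry package's actual API:
```go
package main

import "fmt"

type endpoint struct {
	URL     string
	Version int // 1 or 2
}

// orderEndpoints returns candidate endpoints so that both v2 endpoints
// (HTTPS, then HTTP) are tried before any v1 endpoint.
func orderEndpoints(host string, allowHTTP, allowV1 bool) []endpoint {
	eps := []endpoint{{URL: "https://" + host, Version: 2}}
	if allowHTTP {
		eps = append(eps, endpoint{URL: "http://" + host, Version: 2})
	}
	if allowV1 {
		eps = append(eps, endpoint{URL: "https://" + host, Version: 1})
		if allowHTTP {
			eps = append(eps, endpoint{URL: "http://" + host, Version: 1})
		}
	}
	return eps
}

func main() {
	for _, ep := range orderEndpoints("registry.example.com", true, true) {
		fmt.Printf("v%d %s\n", ep.Version, ep.URL)
	}
}
```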
Prevent login to v1 registries by explicitly setting the version before pinging,
so there is no fallback to v1.
Add unit tests for v2-only mode. Create a mock server that can register
handlers for various endpoints. Assert that no v1 endpoints are hit when
legacy registries are disabled for the following commands: pull, push,
build, run and login. Assert the opposite when legacy registries are not
disabled.
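A stripped-down sketch of the kind of mock server such a test can use, based on the standard net/http/httptest package; the handler paths, package and helper names are illustrative only:
```go
package registrytest

import (
	"net/http"
	"net/http/httptest"
	"sync/atomic"
	"testing"
)

// newMockRegistry starts a test server that counts hits on v1 endpoints,
// so a test can assert that none were reached when legacy registries
// are disabled.
func newMockRegistry(v1Hits *int64) *httptest.Server {
	mux := http.NewServeMux()
	mux.HandleFunc("/v2/", func(w http.ResponseWriter, r *http.Request) {
		w.WriteHeader(http.StatusOK)
	})
	mux.HandleFunc("/v1/", func(w http.ResponseWriter, r *http.Request) {
		atomic.AddInt64(v1Hits, 1)
		w.WriteHeader(http.StatusNotFound)
	})
	return httptest.NewServer(mux)
}

func TestNoV1Fallback(t *testing.T) {
	var v1Hits int64
	srv := newMockRegistry(&v1Hits)
	defer srv.Close()

	// ... run pull/push/build/run/login against srv.URL here ...

	if atomic.LoadInt64(&v1Hits) != 0 {
		t.Fatalf("expected no v1 endpoints to be hit, got %d", v1Hits)
	}
}
```
httptest gives the test a real listening URL, so each command can be pointed at srv.URL without touching a live registry.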
Signed-off-by: Richard Scothern <richard.scothern@gmail.com>
This PR adds a "request ID" to each event generated; the 'docker events'
stream now looks like this:
```
2015-09-10T15:02:50.000000000-07:00 [reqid: c01e3534ddca] de7c5d4ca927253cf4e978ee9c4545161e406e9b5a14617efb52c658b249174a: (from ubuntu) create
```
Note the `[reqid: c01e3534ddca]` part; that's new.
Each HTTP request will generate its own unique ID. So, if you do a
`docker build` you'll see a series of events all with the same reqID.
This allows log processing tools to determine which events are all related
to the same HTTP request.
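Conceptually the plumbing looks something like this hedged sketch; the context key and helper names are invented for illustration and are not the actual daemon code:
```go
package main

import (
	"context"
	"crypto/rand"
	"encoding/hex"
	"fmt"
)

type ctxKey string

const reqIDKey ctxKey = "request-id"

// withRequestID generates a short unique ID for an incoming HTTP request
// and stores it in the context that is passed down into the daemon.
func withRequestID(ctx context.Context) context.Context {
	buf := make([]byte, 6)
	_, _ = rand.Read(buf) // error ignored for brevity in this sketch
	return context.WithValue(ctx, reqIDKey, hex.EncodeToString(buf))
}

// requestID retrieves the ID so event logging can prefix messages with it.
func requestID(ctx context.Context) string {
	if id, ok := ctx.Value(reqIDKey).(string); ok {
		return id
	}
	return ""
}

func main() {
	ctx := withRequestID(context.Background())
	fmt.Printf("[reqid: %s] container create\n", requestID(ctx))
}
```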
I didn't propagate the context to all possible funcs in the daemon;
I decided to just do the ones that needed it in order to get the reqID
into the events. I'd like to have people review this direction first, and
if we're ok with it then I'll make sure we're consistent about when
we pass around the context - IOW, make sure that all funcs at the same level
have a context passed in even if they don't call the log funcs - this will
ensure we're consistent w/o passing it around for all calls unnecessarily.
ping @icecrime @calavera @crosbymichael
Signed-off-by: Doug Davis <dug@us.ibm.com>
The original purpose of this was to cancel downloads if pullV2Tag
returns an error, preventing an associated crash (see #15353). The
broadcaster now accomplishes the same thing that the pipe does, making
the pipe redundant. When pullV2Tag returns, all broadcasters are closed,
which means all further writes to those broadcasters will return errors.
Signed-off-by: Aaron Lehmann <aaron.lehmann@docker.com>
Before, this only waited for the download to complete. There was no
guarantee that the layer had been registered in the graph and was ready
to use. This is especially problematic with v2 pulls, which wait for all
downloads before extracting layers.
Change Broadcaster to allow an error value to be propagated from Close
to the waiters.
Make the wait stop when the extraction is finished, rather than just the
download.
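A minimal sketch of the resulting Broadcaster shape, with simplified names; the real type lives in the broadcaster package and does more (buffering, observers, and so on):
```go
package main

import (
	"fmt"
	"sync"
)

// Broadcaster lets several waiters block until an operation finishes,
// and lets the operation report a terminal error to all of them.
type Broadcaster struct {
	mu     sync.Mutex
	closed bool
	err    error
	done   chan struct{}
}

func NewBroadcaster() *Broadcaster {
	return &Broadcaster{done: make(chan struct{})}
}

// CloseWithError marks the operation finished and records the error
// (nil on success) that every waiter will observe.
func (b *Broadcaster) CloseWithError(err error) {
	b.mu.Lock()
	defer b.mu.Unlock()
	if b.closed {
		return
	}
	b.closed = true
	b.err = err
	close(b.done)
}

// Wait blocks until the operation is finished (extraction included, not
// just the download) and returns the error passed to CloseWithError.
func (b *Broadcaster) Wait() error {
	<-b.done
	b.mu.Lock()
	defer b.mu.Unlock()
	return b.err
}

func main() {
	b := NewBroadcaster()
	go b.CloseWithError(nil)
	fmt.Println("wait returned:", b.Wait())
}
```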
This also fixes v2 layer downloads to prefix the pool key with "layer:"
instead of "img:". "img:" is the wrong prefix, because this is what v1
uses for entire images. A v1 pull waiting for one of these operations to
finish would only wait for that particular layer, not all its
dependencies.
Signed-off-by: Aaron Lehmann <aaron.lehmann@docker.com>
Ensure that layers are not excluded from manifests based on previous pushes.
Continue skipping pushes on layers which were pushed by a previous tag.
Update push multiple tag tests.
Ensure that each tag pushed exists on the registry and is pullable.
Add output comparison on multiple tag push check.
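Conceptually, the fix keeps "already uploaded" separate from "belongs in the manifest", roughly as in this hedged sketch; the types and helper are simplified stand-ins for the push code:
```go
package main

import "fmt"

type layer struct {
	Digest string
	Size   int64
}

// buildManifestLayers always records every layer in the manifest, and
// only skips the blob upload for digests pushed by a previous tag.
func buildManifestLayers(layers []layer, alreadyPushed map[string]bool, upload func(layer) error) ([]layer, error) {
	manifestLayers := make([]layer, 0, len(layers))
	for _, l := range layers {
		manifestLayers = append(manifestLayers, l) // never exclude from the manifest
		if alreadyPushed[l.Digest] {
			continue // blob already on the registry; skip the upload only
		}
		if err := upload(l); err != nil {
			return nil, err
		}
		alreadyPushed[l.Digest] = true
	}
	return manifestLayers, nil
}

func main() {
	pushed := map[string]bool{"sha256:aaa": true}
	ls := []layer{{Digest: "sha256:aaa"}, {Digest: "sha256:bbb"}}
	m, _ := buildManifestLayers(ls, pushed, func(l layer) error {
		fmt.Println("uploading", l.Digest)
		return nil
	})
	fmt.Println("manifest layers:", len(m))
}
```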
Fixes #15536
Signed-off-by: Derek McGowan <derek@mcgstyle.net> (github: dmcgowan)
The Docker daemon should send the actual actions the client asks for when
issuing tokens, not all the permissions that the client is granted.
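For illustration, the scope string sent with a token request should list only the requested actions; the helper below is hypothetical, though the repository:<name>:<actions> format follows the registry token spec:
```go
package main

import (
	"fmt"
	"strings"
)

// tokenScope builds the scope parameter for a registry token request,
// listing only the actions the client actually asked for (e.g. "pull"
// for a pull, "pull,push" for a push), not every permission it holds.
func tokenScope(repository string, actions ...string) string {
	return fmt.Sprintf("repository:%s:%s", repository, strings.Join(actions, ","))
}

func main() {
	// A plain pull should only ever request the "pull" action.
	fmt.Println(tokenScope("library/ubuntu", "pull"))
	// Output: repository:library/ubuntu:pull
}
```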
Signed-off-by: xiekeyang <xiekeyang@huawei.com>
This file was not well documented and had very high cyclomatic complexity.
This patch completely rearranges this file and the ImageDelete method to
be easier to follow and more maintainable in the future.
Docker-DCO-1.1-Signed-off-by: Josh Hawn <josh.hawn@docker.com> (github: jlhawn)
- Rename to Broadcaster
- Document exported types
- Change Wait function to just wait. Writing a message to the writer and
adding the writer to the observers list are now handled by separate
function calls.
- Avoid importing logrus (the condition where it was used should never
happen, anyway).
- Make writes non-blocking (see the sketch below)
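To illustrate the non-blocking write behaviour, here is a hedged Go sketch in which Write only queues the message and a background goroutine fans it out to observers; the asyncWriter type is invented for this example and is not the actual Buffered implementation:
```go
package main

import (
	"fmt"
	"io"
	"os"
	"sync"
)

// asyncWriter queues messages in Write and fans them out to observers
// from a background goroutine, so slow observers do not block producers.
type asyncWriter struct {
	mu        sync.Mutex
	observers []io.Writer
	queue     chan []byte
	done      sync.WaitGroup
}

func newAsyncWriter() *asyncWriter {
	a := &asyncWriter{queue: make(chan []byte, 64)}
	a.done.Add(1)
	go func() {
		defer a.done.Done()
		for msg := range a.queue {
			a.mu.Lock()
			for _, w := range a.observers {
				w.Write(msg) // observer errors are ignored in this sketch
			}
			a.mu.Unlock()
		}
	}()
	return a
}

// Add registers an observer; replaying earlier output to it is now the
// caller's job, matching the split of responsibilities described above.
func (a *asyncWriter) Add(w io.Writer) {
	a.mu.Lock()
	a.observers = append(a.observers, w)
	a.mu.Unlock()
}

// Write only queues the message (up to the channel capacity) and returns.
func (a *asyncWriter) Write(p []byte) (int, error) {
	a.queue <- append([]byte(nil), p...)
	return len(p), nil
}

// Close flushes the queue and waits for the fan-out goroutine to exit.
func (a *asyncWriter) Close() {
	close(a.queue)
	a.done.Wait()
}

func main() {
	a := newAsyncWriter()
	a.Add(os.Stdout)
	fmt.Fprintln(a, "progress: 50%")
	a.Close()
}
```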
Signed-off-by: Aaron Lehmann <aaron.lehmann@docker.com>
Previously, its other return value was used even when it returned an
error. This is awkward and goes against the convention. It also could
have resulted in a nil pointer dereference when an error was returned
because of an unknown pool type. This changes the unknown pool type
error to a panic (since the pool types are hardcoded at call sites and
must always be "push" or "pull"), and returns a "found" boolean instead
of an error.
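The resulting calling convention looks roughly like this hedged sketch; the pools type and add method are simplified stand-ins for the daemon's pull/push pools:
```go
package main

import (
	"fmt"
	"sync"
)

type pools struct {
	mu        sync.Mutex
	pullPools map[string]struct{}
	pushPools map[string]struct{}
}

// add returns whether an entry for this key was already found in the
// requested pool, instead of returning an error. An unknown pool type is
// a programming mistake (call sites hardcode "push" or "pull"), so it
// panics rather than forcing callers to handle an impossible error.
func (p *pools) add(poolType, key string) (found bool) {
	p.mu.Lock()
	defer p.mu.Unlock()
	var pool map[string]struct{}
	switch poolType {
	case "pull":
		pool = p.pullPools
	case "push":
		pool = p.pushPools
	default:
		panic("unknown pool type: " + poolType)
	}
	if _, found = pool[key]; found {
		return true
	}
	pool[key] = struct{}{}
	return false
}

func main() {
	p := &pools{pullPools: map[string]struct{}{}, pushPools: map[string]struct{}{}}
	fmt.Println(p.add("pull", "layer:sha256:abc")) // false: first caller
	fmt.Println(p.add("pull", "layer:sha256:abc")) // true: already in progress
}
```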
Signed-off-by: Aaron Lehmann <aaron.lehmann@docker.com>
Based on #12874 from Sam Abed <sam.abed@gmail.com>. His original commit
was brought up to date by manually porting the changes in pull.go into
the new code in pull_v1.go and pull_v2.go.
Fixes #8385
Signed-off-by: Aaron Lehmann <aaron.lehmann@docker.com>
The practice of buffering to a tempfile during a push contributes massively
to the perception of slow V2 push performance. The protocol was actually designed
to avoid precalculation, supporting cut-through data push. This means we can
assemble the layer, calculate its digest and push it to the remote endpoint, all
at the same time.
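In spirit, that means hashing the layer while it streams to the registry instead of spooling it to a tempfile first; a hedged sketch using only the standard library (pushLayer is a made-up helper, not the actual push code):
```go
package main

import (
	"crypto/sha256"
	"fmt"
	"io"
	"strings"
)

// pushLayer streams the layer tar to the remote writer while computing
// its sha256 at the same time, so no tempfile is needed and the digest
// is known as soon as the upload finishes.
func pushLayer(layer io.Reader, remote io.Writer) (digest string, size int64, err error) {
	h := sha256.New()
	// Everything read from the layer is also fed to the hash.
	n, err := io.Copy(remote, io.TeeReader(layer, h))
	if err != nil {
		return "", 0, err
	}
	return fmt.Sprintf("sha256:%x", h.Sum(nil)), n, nil
}

func main() {
	var uploaded strings.Builder
	d, n, err := pushLayer(strings.NewReader("fake layer bytes"), &uploaded)
	fmt.Println(d, n, err)
}
```
io.TeeReader keeps the upload and the digest calculation on a single pass over the data, which is what removes the need to buffer to disk.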
This should increase performance massively on systems with slow disks or IO
bottlenecks.
Signed-off-by: Stephen J Day <stephen.day@docker.com>