With 32ba6ab from #9261, TempArchive now closes the underlying file and
cleans it up as soon as the file's contents have been read. When pushing
an image, PushImageLayerRegistry attempts to call Close() on the layer,
which is a TempArchive that has already been closed. In this situation,
Close() returns an "invalid argument" error.
Add a Close method to TempArchive that is a no-op if the underlying
file has already been closed.
Signed-off-by: Andy Goldstein <agoldste@redhat.com>
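A minimal sketch of the idempotent Close described above, assuming TempArchive wraps an *os.File; the field names and the sync.Once approach are illustrative rather than the exact pkg/archive implementation:

```go
package archive

import (
	"os"
	"sync"
)

// TempArchive wraps a temporary file whose contents are streamed out once.
// The fields shown here are illustrative, not the exact pkg/archive ones.
type TempArchive struct {
	*os.File
	closeOnce sync.Once
}

// Close closes and removes the underlying file at most once, so callers
// such as the push code path can call Close after the archive has already
// cleaned itself up without getting an "invalid argument" error.
func (archive *TempArchive) Close() error {
	archive.closeOnce.Do(func() {
		archive.File.Close()
		os.Remove(archive.File.Name())
	})
	return nil
}
```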
These two cases did not actually read the same content with each iteration
of the benchmark. After the first read, the buffer was consumed. This patch
corrects this by using a bytes.Reader and seeking to the beginning of the
buffer at the beginning of each iteration.
Unfortunately, this benchmark was not actually as fast as we believed, but
the corrected results are closer to those of the other benchmarks.
Docker-DCO-1.1-Signed-off-by: Josh Hawn <josh.hawn@docker.com> (github: jlhawn)
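A hedged sketch of the corrected benchmark pattern: the fixture is wrapped in a bytes.Reader and rewound at the top of every iteration so each pass hashes identical content. The NewTarSum(reader, disableCompression, version) constructor and Version0 are assumed to match the pkg/tarsum API; buf stands in for tar data built during setup.

```go
package tarsum

import (
	"bytes"
	"io"
	"io/ioutil"
	"testing"
)

// buf stands in for tar data generated once during test setup.
var buf []byte

func BenchmarkTarSumFixedContent(b *testing.B) {
	reader := bytes.NewReader(buf)
	b.ResetTimer()
	for i := 0; i < b.N; i++ {
		// Rewind so every iteration reads the full, identical content
		// instead of finding an already-consumed buffer.
		if _, err := reader.Seek(0, 0); err != nil {
			b.Fatal(err)
		}
		ts, err := NewTarSum(reader, true, Version0)
		if err != nil {
			b.Fatal(err)
		}
		if _, err := io.Copy(ioutil.Discard, ts); err != nil {
			b.Fatal(err)
		}
		ts.Sum(nil)
	}
}
```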
Currently, devicemapper CreateDevice and CreateSnapDevice keep retrying
device creation until a suitable device id is found.
With the new transaction mechanism we need to store the device id in the
transaction before the device has been created.
So change the logic so that the caller decides which device id to use. If
that device id is not available, the caller bumps the device id and
retries.
That way the caller can also update the transaction each time it tries a
new id. The transaction-related patches will come later in the series.
Signed-off-by: Vivek Goyal <vgoyal@redhat.com>
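A rough, self-contained sketch of the caller-driven retry loop described above; createDevice, registerDeviceId and errDeviceIdExists are hypothetical stand-ins for the devicemapper calls. The point is only that the caller now owns the device id and can record it in the transaction before each attempt:

```go
package main

import (
	"errors"
	"fmt"
)

// errDeviceIdExists is a hypothetical sentinel for "this device id is taken".
var errDeviceIdExists = errors.New("device id already exists")

// createDevice stands in for the devicemapper thin-device creation call.
func createDevice(poolName string, deviceId int) error {
	if deviceId < 3 { // pretend ids 0-2 are already in use
		return errDeviceIdExists
	}
	return nil
}

// registerDeviceId stands in for storing the id in the on-disk transaction
// before the device is actually created.
func registerDeviceId(deviceId int) error {
	fmt.Println("transaction: trying device id", deviceId)
	return nil
}

// createDeviceWithRetry lets the caller pick the id, record it, and bump
// and retry on a collision, instead of devicemapper retrying internally.
func createDeviceWithRetry(poolName string, startId int) (int, error) {
	for deviceId := startId; ; deviceId++ {
		if err := registerDeviceId(deviceId); err != nil {
			return 0, err
		}
		err := createDevice(poolName, deviceId)
		if err == nil {
			return deviceId, nil
		}
		if err != errDeviceIdExists {
			return 0, err
		}
		// The id was taken: the loop bumps it and updates the transaction.
	}
}

func main() {
	id, err := createDeviceWithRetry("docker-pool", 0)
	fmt.Println(id, err)
}
```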
I noticed that three of the tarsum test cases expected a tarsum with
a sha256 hash of
e3b0c44298fc1c149afbf4c8996fb92427ae41e4649b934ca495991b7852b855
As I've been working with sha256 quite a bit lately, it struck me that
this is the sha256 digest of empty input, which means that no data was
processed. However, these tests *do* process data. It turns out that
there was a bug in the test handling code which did not wait for tarsum
to finish completely. This patch corrects these test cases.
I'm unaware of anywhere else in the code base where this would be an
issue, though we definitely need to watch out in the future to ensure we
complete tarsum reads (waiting for EOF).
Docker-DCO-1.1-Signed-off-by: Josh Hawn <josh.hawn@docker.com> (github: jlhawn)
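Since that constant is what an undrained stream produces, the safe pattern is to read to EOF before calling Sum. A hedged sketch, assuming the pkg/tarsum import path and TarSum interface of the time:

```go
package tarsumcheck

import (
	"io"
	"io/ioutil"

	"github.com/docker/docker/pkg/tarsum"
)

// drainAndSum reads the TarSum stream all the way to EOF before asking for
// the sum, so the checksum reflects all of the tar data rather than only
// what happened to be consumed when Sum was called.
func drainAndSum(ts tarsum.TarSum) (string, error) {
	if _, err := io.Copy(ioutil.Discard, ts); err != nil {
		return "", err
	}
	return ts.Sum(nil), nil
}
```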
The current implementation commingles things that ought not be together.
There are _some_ similarities between parsing for the different proto
types, but they are more different than alike, making the code extremely
difficult to reason about.
Signed-off-by: Brian Goff <cpuguy83@gmail.com>
This reverts commit 967a42f116.
Signed-off-by: Tatsushi Inagaki <e29253@jp.ibm.com>
Roll back the change in order to fix the parameter of HumanSize, from int64 back to float64.
This moves the IsGIT and IsURL functions out of the generic `utils`
package and into their own `urlutil` pkg.
Signed-off-by: Michael Crosby <crosbymichael@gmail.com>
Signed-off-by: Tibor Vass <teabee89@gmail.com>
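A hedged sketch of what a standalone urlutil package can look like; the exact prefixes Docker checks may differ from the ones shown here:

```go
package urlutil

import "strings"

var (
	validURLPrefixes = []string{"http://", "https://"}
	validGitPrefixes = []string{"git://", "github.com/", "git@"}
)

// IsURL reports whether str looks like an HTTP(S) URL.
func IsURL(str string) bool {
	return hasAnyPrefix(str, validURLPrefixes)
}

// IsGIT reports whether str looks like a git repository reference.
func IsGIT(str string) bool {
	return hasAnyPrefix(str, validGitPrefixes) || strings.HasSuffix(str, ".git")
}

func hasAnyPrefix(str string, prefixes []string) bool {
	for _, prefix := range prefixes {
		if strings.HasPrefix(str, prefix) {
			return true
		}
	}
	return false
}
```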
Conflicts:
	pkg/archive/archive.go
Fixed a conflict that git could not resolve on its own, caused by the
added BreakoutError.
Conflicts:
	pkg/archive/archive_test.go
Fixed a conflict in the imports.
I also needed to add an mflag.IsSet() function that lets you check
whether a certain flag was actually specified on the command line.
Per #9221 - also tweaked the docs to fix a typo.
Closes #9221
Signed-off-by: Doug Davis <dug@us.ibm.com>
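The stdlib flag package, which mflag mirrors, only walks explicitly set flags in Visit, so an IsSet helper can be built on top of that. This is a sketch of the idea using the standard flag package, not necessarily the exact mflag implementation:

```go
package main

import (
	"flag"
	"fmt"
)

// isSet reports whether the named flag was explicitly given on the command
// line, as opposed to merely holding its default value. flag.Visit only
// visits flags that were set, which is what makes this work.
func isSet(fs *flag.FlagSet, name string) bool {
	set := false
	fs.Visit(func(f *flag.Flag) {
		if f.Name == name {
			set = true
		}
	})
	return set
}

func main() {
	fs := flag.NewFlagSet("example", flag.ExitOnError)
	fs.String("tag", "", "tag for the build")
	fs.Bool("pull", false, "always pull the base image")
	fs.Parse([]string{"--tag", "latest"})
	fmt.Println(isSet(fs, "tag"))  // true
	fmt.Println(isSet(fs, "pull")) // false: defined but not specified
}
```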
This fixes the removal of TempArchives that can be consumed in a single
read. Such archives weren't getting removed because EOF wasn't being
triggered.
Docker-DCO-1.1-Signed-off-by: Cristian Staretu <cristian.staretu@gmail.com> (github: unclejack)
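A self-contained sketch of the idea behind the fix: track how much has been read and trigger cleanup as soon as the byte count reaches the archive's size, instead of waiting for a later Read call to return io.EOF (which never comes when the whole file fits in one read). tempArchive is a cut-down stand-in, not the real type:

```go
package main

import (
	"fmt"
	"strings"
)

// tempArchive is a cut-down stand-in for archive.TempArchive: a reader that
// knows its total Size and must clean itself up once fully consumed.
type tempArchive struct {
	r    *strings.Reader
	Size int64
	read int64
}

func (a *tempArchive) Read(p []byte) (int, error) {
	n, err := a.r.Read(p)
	a.read += int64(n)
	// Clean up as soon as everything has been read, even when the whole
	// content fits in one Read call that returns err == nil.
	if err != nil || a.read == a.Size {
		fmt.Println("cleanup: removing temp file")
	}
	return n, err
}

func main() {
	a := &tempArchive{r: strings.NewReader("hello"), Size: 5}
	buf := make([]byte, 64)
	a.Read(buf) // reads all 5 bytes at once and still triggers cleanup
}
```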
The filter name is now trimmed and lowercased before evaluation, so the
check is case-insensitive and ignores surrounding whitespace.
Signed-off-by: Oh Jinkyun <tintypemolly@gmail.com>
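The normalization amounts to something like the following; this is a sketch, not the exact parser code:

```go
package main

import (
	"fmt"
	"strings"
)

// normalizeFilterName makes filter-name matching case-insensitive and
// tolerant of surrounding whitespace, e.g. " Dangling " -> "dangling".
func normalizeFilterName(name string) string {
	return strings.ToLower(strings.TrimSpace(name))
}

func main() {
	fmt.Println(normalizeFilterName(" Dangling "))
}
```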
Currently we set up a cookie but upon failure do not call UdevWait().
This does not clean up the cookie and the associated semaphore, and the
system will soon max out on the total number of semaphores.
To avoid this, call UdevWait() even in the failure path, which in turn
will clean up the associated semaphore.
Signed-off-by: Vivek Goyal <vgoyal@redhat.com>
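A sketch of the pattern: once a cookie has been set, UdevWait must run on every exit path so the kernel semaphore backing the cookie is released, and deferring it right after the cookie is set is one way to guarantee that. setCookie, udevWait and runTask are hypothetical stand-ins for the libdevmapper calls:

```go
package main

import "fmt"

// Hypothetical stand-ins for the libdevmapper cookie / udev-sync calls.
func setCookie(cookie *uint) error { *cookie = 42; return nil }
func udevWait(cookie uint)         { fmt.Println("released semaphore for cookie", cookie) }
func runTask() error               { return fmt.Errorf("task failed") }

func createPool() error {
	var cookie uint
	if err := setCookie(&cookie); err != nil {
		return err
	}
	// Wait on the cookie on every exit path, success or failure; skipping
	// it on errors leaks the semaphore behind the cookie until the system
	// hits its semaphore limit.
	defer udevWait(cookie)

	return runTask()
}

func main() {
	fmt.Println(createPool())
}
```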
Signed-off-by: Vincent Batts <vbatts@redhat.com>
pkg/archive contains code invoked both from the cli (cross-platform) and
from the daemon (Linux only), and its Unix-specific dependencies break
compilation on Windows. We extracted those stat-related funcs into
platform-specific implementations in pkg/system and added unit tests.
Signed-off-by: Ahmet Alp Balkan <ahmetb@microsoft.com>
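A minimal illustration of the split, with an illustrative file name and helper: the Unix-only syscall.Stat_t access lives behind a build tag in pkg/system, and a Windows counterpart provides the platform-appropriate implementation behind the opposite tag.

```go
// stat_linux.go (illustrative file name)

// +build linux

package system

import "syscall"

// Mtim pulls the modification time out of the Linux-only syscall.Stat_t,
// so pkg/archive never references that type directly and keeps compiling
// on Windows.
func Mtim(stat *syscall.Stat_t) syscall.Timespec {
	return stat.Mtim
}
```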
Some parts of pkg/archive are called from both client and daemon code.
To get the package compiling on Windows, these funcs are extracted into
build tags.
Signed-off-by: Ahmet Alp Balkan <ahmetb@microsoft.com>
SIGCHLD and SIGWINCH, used in api/client (cli code), are not available
on Windows. Extract them into separate files with build tags.
Signed-off-by: Ahmet Alp Balkan <ahmetb@microsoft.com>
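The same build-tag approach applies to the signal handling; a sketch where the SIGWINCH-based resize loop lives in a Unix-only file (file, function, and parameter names are illustrative):

```go
// resize_unix.go (illustrative file name)

// +build !windows

package client

import (
	"os"
	"os/signal"
	"syscall"
)

// monitorTtySize calls resize whenever the local terminal receives a
// SIGWINCH, a signal that does not exist on Windows and therefore has to
// stay out of cross-platform files.
func monitorTtySize(resize func()) {
	sigchan := make(chan os.Signal, 1)
	signal.Notify(sigchan, syscall.SIGWINCH)
	go func() {
		for range sigchan {
			resize()
		}
	}()
}
```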
Otherwise udev can unnecessarily execute various rules (and issue
scanning IO, etc) against the thin-pool -- which can never be a
top-level device.
Docker-DCO-1.1-Signed-off-by: Mike Snitzer <snitzer@redhat.com> (github: snitm)
The current Dev version of TarSum includes hashing of extended
file attributes and omits inclusion of modified time headers.
I refactored the logic around the version differences to make it clearer
that the difference between versions lies in how tar headers are selected
and ordered.
TarSum Version 1 is now declared, with the new Dev version continuing to
track it.
Docker-DCO-1.1-Signed-off-by: Josh Hawn <josh.hawn@docker.com> (github: jlhawn)
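A sketch of the refactoring idea: each TarSum version maps to a function that selects and orders the tar headers to hash, so the per-version behaviour lives in one selector. Names and the exact header sets are illustrative, not the pkg/tarsum internals:

```go
package main

import (
	"archive/tar"
	"fmt"
	"sort"
	"strconv"
)

// headerSelector returns the ordered (name, value) pairs of a tar header
// that a given TarSum version includes in the checksum.
type headerSelector func(h *tar.Header) [][2]string

// v0Headers includes the modified time and ignores extended attributes.
func v0Headers(h *tar.Header) [][2]string {
	return [][2]string{
		{"name", h.Name},
		{"mode", strconv.FormatInt(h.Mode, 10)},
		{"mtime", strconv.FormatInt(h.ModTime.UTC().Unix(), 10)},
	}
}

// v1Headers drops mtime and appends sorted xattrs, which is the behaviour
// the Dev version now tracks.
func v1Headers(h *tar.Header) [][2]string {
	pairs := [][2]string{
		{"name", h.Name},
		{"mode", strconv.FormatInt(h.Mode, 10)},
	}
	keys := make([]string, 0, len(h.Xattrs))
	for k := range h.Xattrs {
		keys = append(keys, k)
	}
	sort.Strings(keys)
	for _, k := range keys {
		pairs = append(pairs, [2]string{"xattr." + k, h.Xattrs[k]})
	}
	return pairs
}

func main() {
	selectors := map[string]headerSelector{"Version0": v0Headers, "Version1": v1Headers}
	h := &tar.Header{Name: "etc/hosts", Mode: 0644,
		Xattrs: map[string]string{"security.capability": "x"}}
	for version, sel := range selectors {
		fmt.Println(version, sel(h))
	}
}
```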
This re-applies commit b39d02b with additional iptables rules to solve the issue with containers routing back into themselves.
The previous issue with this attempt was that the DNAT rule would send traffic back into the container it came from. When this happens you have two issues:
1) Reverse path filtering. The container sees the traffic coming in from the outside with a source address equal to its own, so reverse path filtering kicks in and drops the packet.
2) Direct return mismatch. Even with reverse path filtering turned off, the looped-back packet has the container's own address as its source, so the reply traffic is also sent with the container's own address as its source. But the original packet was sent to the host IP address, so the reply is dropped because it comes from an address (and likely a port) that the original traffic was not sent to.
The solution is to masquerade the traffic when it gets routed back into the origin container. However, for this to work you need to enable hairpin mode on the bridge port, otherwise the kernel will just drop the traffic.
The hairpin mode setting is part of libcontainer, while the MASQ change is part of docker.
This reverts commit 63c303eecd.
Docker-DCO-1.1-Signed-off-by: Patrick Hemmer <patrick.hemmer@gmail.com> (github: phemmer)
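A rough Go sketch of the extra NAT rule: traffic that the DNAT rule has looped back into the very container that sent it gets masqueraded, so the reply carries an address the originating socket expects. The exact arguments Docker's portmapper uses may differ, and the hairpin_mode toggle on the bridge port itself lives in libcontainer:

```go
package main

import (
	"fmt"
	"os/exec"
)

// addHairpinMasq masquerades traffic that a published-port DNAT rule has
// routed back into the container it originated from.
func addHairpinMasq(containerIP string, port int, proto string) error {
	args := []string{
		"-t", "nat", "-A", "POSTROUTING",
		"-p", proto,
		"-s", containerIP,
		"-d", containerIP,
		"--dport", fmt.Sprintf("%d", port),
		"-j", "MASQUERADE",
	}
	out, err := exec.Command("iptables", args...).CombinedOutput()
	if err != nil {
		return fmt.Errorf("iptables failed: %v (%s)", err, out)
	}
	return nil
}

func main() {
	if err := addHairpinMasq("172.17.0.2", 8080, "tcp"); err != nil {
		fmt.Println(err)
	}
}
```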
By default this is a demo of file differences, but it can be used to
create a tar of the changes between an old and a new path.
Signed-off-by: Vincent Batts <vbatts@redhat.com>
Signed-off-by: Vincent Batts <vbatts@hashbangbash.com>
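A sketch of that usage, assuming the pkg/archive change-detection helpers (ChangesDirs and ExportChanges) keep roughly this shape:

```go
package main

import (
	"fmt"
	"io"
	"os"

	"github.com/docker/docker/pkg/archive"
)

func main() {
	// Show the file differences between two directory trees...
	changes, err := archive.ChangesDirs("/tmp/new", "/tmp/old")
	if err != nil {
		fmt.Fprintln(os.Stderr, err)
		os.Exit(1)
	}
	for _, change := range changes {
		fmt.Println(change.String())
	}

	// ...or turn those differences into a tar stream of just the changes.
	tarStream, err := archive.ExportChanges("/tmp/new", changes)
	if err != nil {
		fmt.Fprintln(os.Stderr, err)
		os.Exit(1)
	}
	defer tarStream.Close()
	io.Copy(os.Stdout, tarStream)
}
```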
In an effort to make layer content 'stable' between import
and export from two different graph drivers, we must resolve
an issue where AUFS produces metadata files in its layers
which other drivers explicitly ignore when importing.
The issue presents itself like this:
- Generate a layer using AUFS
- On commit of that container, the new stored layer contains
AUFS metadata files/dirs. The stored layer content has some
tarsum value: '1234567'
- `docker save` that image to a USB drive and `docker load`
into another docker engine instance which uses another
graph driver, say 'btrfs'
- On load, this graph driver explicitly ignores any AUFS metadata
that it encounters. The stored layer content now has some
different tarsum value: 'abcdefg'.
The only apparently useful AUFS metadata to keep are the pseudo-link
files located at `/.wh..wh.plink/`. These files hold information at the
RW layer about hard-linked files between this layer and another layer.
The other graph drivers make sure to copy up these pseudo-linked files,
but I've tested out a few different situations and it seems that this is
unnecessary (in my tests, AUFS already copies up the other hard-linked
files to the RW layer).
This changeset adds explicit exclusion of the AUFS metadata files and
directories (NOTE: not the whiteout files!) on commit of a container
using the AUFS storage driver.
Also included is a change to the archive package. It now explicitly
omits the root directory from the resulting tar archive, for two reasons:
1) it's unnecessary, and 2) it's another difference from what other
graph drivers produce when exporting a layer to a tar archive.
Docker-DCO-1.1-Signed-off-by: Josh Hawn <josh.hawn@docker.com> (github: jlhawn)
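A sketch of the exclusion check implied above: entries under the AUFS bookkeeping prefix `.wh..wh.` are skipped when taring up a commit, ordinary whiteout files (`.wh.<name>`, which other drivers interpret) are kept, and the root entry itself is omitted. The function is illustrative, not the actual AUFS driver code:

```go
package main

import (
	"fmt"
	"strings"
)

// skipEntry reports whether a path inside the layer should be left out of
// the exported tar: the layer root itself and AUFS metadata files and
// directories (".wh..wh.*"), but not ordinary whiteout files (".wh.<name>").
func skipEntry(name string) bool {
	name = strings.TrimPrefix(name, "./")
	if name == "" || name == "." {
		return true // don't emit the root directory entry
	}
	return strings.HasPrefix(name, ".wh..wh.")
}

func main() {
	for _, p := range []string{".", ".wh..wh.plink/42", ".wh.deleted-file", "etc/hosts"} {
		fmt.Println(p, "skip:", skipEntry(p))
	}
}
```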