In Go one should never modify a slice while iterating over it at the
same time. Doing so causes odd side effects, because the underlying
array elements are shifted around without the range loop index knowing
about it. If you delete an element, the loop skips the next one and,
on the last iteration, reads past the end of the shrunk slice; that
read does not panic but yields the zero value, nil here, which then
causes the panic on layer.Flags == nil.
Here is a simple example to show the behavior:
package main

import (
    "fmt"
    "slices"
)

func main() {
    slice := []int{1, 2, 3, 4, 5, 6, 7, 8, 9}
    for _, num := range slice {
        if num == 5 {
            // Deleting while ranging shifts the remaining elements left
            // and zeroes the tail of the underlying array.
            slice = slices.DeleteFunc(slice, func(n int) bool {
                return n == 5
            })
        }
        fmt.Println(num)
    }
}
The loop will not print 6, and the last number it prints is 0 (the
zero value for an int).
Fixes #2184
Signed-off-by: Paul Holzinger <pholzing@redhat.com>
(cherry picked from commit 99b0d2d423)
Removes the duplicate copy*Map functions by using the general newMapFrom function.
Reduces allocations of empty maps by using the copyMapPrefferingNil function.
This change may affect behavior: instead of an empty allocated map, a nil map may now be returned.
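A minimal sketch of what such a nil-preferring copy helper could look like (illustrative only; the actual helper may differ in details):
package storage // illustrative package name

import "maps"

// copyMapPrefferingNil returns nil when the input map is nil or empty,
// avoiding the allocation of a fresh empty map; otherwise it returns a
// shallow clone of the input.
func copyMapPrefferingNil[K comparable, V any](m map[K]V) map[K]V {
    if len(m) == 0 {
        return nil
    }
    return maps.Clone(m)
}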
Signed-off-by: Jan Rodák <hony.com@seznam.cz>
Removes duplicate copy*Slice functions by replacing them with a generic
copy function or with the slices.Clone function.
Also simplifies the stringSliceWithoutValue function.
These changes should not change the behavior.
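For illustration, the simplified stringSliceWithoutValue can be expressed with the same helpers (a sketch; the exact code in this change may differ):
package storage // illustrative package name

import "slices"

// stringSliceWithoutValue returns a copy of slice with every occurrence of
// value removed; the input slice is not modified.
func stringSliceWithoutValue(slice []string, value string) []string {
    return slices.DeleteFunc(slices.Clone(slice), func(v string) bool {
        return v == value
    })
}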
Signed-off-by: Jan Rodák <hony.com@seznam.cz>
The current value obtained by summing the sizes of regular file contents
does not match the size of the uncompressed layer tarball.
We don't have a convenient source to compute the correct size
for estargz without pulling the full layer and defeating the point;
so we must allow for the size being unknown.
For recent zstd:chunked images, we have the full tar-split,
so we can compute the correct size; that will happen in
the following commits.
Signed-off-by: Miloslav Trmač <mitr@redhat.com>
because it does not return nil when the slice length is 0.
This behavior made the slices.Clone function allocate
an unnecessary amount of memory for zero-length slices,
and caused the c/common tests to fail.
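A sketch of a slice copy helper that preserves nil for empty inputs (illustrative name and shape, not necessarily the exact replacement used here):
package storage // illustrative package name

import "slices"

// copySlicePreferringNil returns nil for a nil or empty input instead of an
// allocated zero-length slice; otherwise it returns a clone of the input.
func copySlicePreferringNil[S ~[]E, E any](s S) S {
    if len(s) == 0 {
        return nil
    }
    return slices.Clone(s)
}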
Signed-off-by: Jan Rodák <hony.com@seznam.cz>
the global singleton was never updated, causing the cache to be
recreated for every layer.
It is not possible to hold the layersCache mutex for the entire load(),
because load() calls into some store APIs and that would deadlock:
findDigestInternal() is already called while some store locks are
held.
Another benefit is that now only one goroutine can run load(),
preventing multiple calls to load() from happening in parallel and
doing the same work.
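Roughly, the pattern looks like the following sketch (names and details are illustrative, not the exact code in this change):
package chunked // illustrative package name

import "sync"

// layersCache is a stand-in for the real cache type; its own mutex (used by
// findDigestInternal) is intentionally not held across load().
type layersCache struct{}

func (c *layersCache) load(store any) error {
    // calls into store APIs; must not be invoked while holding c's mutex
    return nil
}

var (
    cacheMutex sync.Mutex   // guards the singleton pointer and serializes load()
    cache      *layersCache // process-wide singleton
)

func getLayersCache(store any) (*layersCache, error) {
    cacheMutex.Lock()
    defer cacheMutex.Unlock()

    if cache == nil {
        cache = &layersCache{}
    }
    // Holding cacheMutex (not the cache's internal mutex) for the whole
    // load() means only one goroutine refreshes the cache at a time.
    if err := cache.load(store); err != nil {
        return nil, err
    }
    return cache, nil
}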
Closes: https://github.com/containers/storage/issues/2023
Signed-off-by: Giuseppe Scrivano <gscrivan@redhat.com>
Use the "slices", "maps" standard library packages, or other
readily-available features.
Should not change behavior.
Signed-off-by: Miloslav Trmač <mitr@redhat.com>
Conservatively use Index* + Delete to delete the
first element where it's not obvious that the code would really
want to delete all instances.
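For example, with the standard slices package (a sketch of the pattern, not the exact call sites):
package storage // illustrative package name

import "slices"

// removeFirst deletes only the first occurrence of value; by contrast,
// slices.DeleteFunc would remove every matching element.
func removeFirst(names []string, value string) []string {
    if i := slices.Index(names, value); i != -1 {
        names = slices.Delete(names, i, i+1)
    }
    return names
}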
Signed-off-by: Miloslav Trmač <mitr@redhat.com>
if the compressed digest was validated, as happens when
'pull_options = {convert_images = "true"}' is set, then store it as
well so that reusing the blob by its compressed digest works.
Previously, when an image converted to zstd:chunked was pulled a
second time, it would not be recognized by its compressed digest,
so the image had to be pulled again.
Signed-off-by: Giuseppe Scrivano <gscrivan@redhat.com>
SetBigData itself calls saveFor; so doing that before raises
fewer questions about stale data / stepping over each other.
The change in timing is externally observable, but should hopefully
not matter much in practice, because this code is typically called
from layerStore.create as part of an atomic create+populate operation
protected by incompleteFlag.
Signed-off-by: Miloslav Trmač <mitr@redhat.com>
Add a race-condition-free alternative to using CreateLayer and
ApplyDiffFromStagingDirectory, ensuring the store is locked for the
entire duration while the layer is being created and populated.
Signed-off-by: Giuseppe Scrivano <gscrivan@redhat.com>
enforce that the stagingDirectory must have the same value as the
diffOutput.Target variable. This allows simplifying the internal API.
Signed-off-by: Giuseppe Scrivano <gscrivan@redhat.com>
This allows us to correctly set (CompressedDigest, CompressedSize)
when copying data from another layer; in that case we don't have the
compressed data, so computing the size from compressedCounter
sets an incorrect value.
Signed-off-by: Miloslav Trmač <mitr@redhat.com>
introduce the TOCDigest field for a layer. TOCDigest is designed to
store the digest of the Table of Contents (TOC) of the blob.
It is useful when the UncompressedDigest cannot be validated during a
partial image pull, but the TOC itself is validated.
Signed-off-by: Giuseppe Scrivano <gscrivan@redhat.com>
AFAICS this call is intended to "remap" the parent layer's contents to the
desired IDMappings; but when there is no parent layer, there is
nothing to remap.
Signed-off-by: Miloslav Trmač <mitr@redhat.com>
change the file format to store the tar-split as part of the
zstd:chunked image. This will allow clients to rebuild the entire
tarball without having to download it fully.
Also store the uncompressed digest for the tarball, so that it can be
stored into the storage database.
Needs: https://github.com/containers/image/pull/1976
Signed-off-by: Giuseppe Scrivano <gscrivan@redhat.com>
Handle old-fashioned ID mappings when looking at layers. Nowadays,
we'll use an idmapped mount if we can, but we shouldn't blow up if we
had to chown a layer because we couldn't use an idmapped mount.
Signed-off-by: Nalin Dahyabhai <nalin@redhat.com>
tarLogger calls the provided callback in a separate
goroutine, and that can happen after tarLogger.Write
returns; tarLogger.Close is required to ensure that all the callbacks
have been called and that the created uidLog and gidLog
values can be consumed.
So, move most of the IO pipeline that is formed around the
layer stream into a nested function that terminates earlier, notably
so that the "defer idLogger.Close()" is called at the appropriate time.
Signed-off-by: Miloslav Trmač <mitr@redhat.com>
We will want to move the next part of the code
into a closure; move variables that will be
accessed outside of that section.
Should not change behavior.
Signed-off-by: Miloslav Trmač <mitr@redhat.com>
AFAICS that can't fail with current pgzip; and
pgzip.NewWriter also calls NewWriterLevel, but it just
swallows the error.
Any failure would therefore be very unexpected;
report it instead of suppressing it.
Signed-off-by: Miloslav Trmač <mitr@redhat.com>
gofumpt is a superset of gofmt, enabling some more code formatting
rules.
This commit is brought to you by
gofumpt -w .
Signed-off-by: Kir Kolyshkin <kolyshkin@gmail.com>
The lockfile's write record is now updated prior to the actual write
operation. This ensures that, in the event of an unexpected
termination, other processes are correctly notified of an initiated
write operation.
Signed-off-by: Giuseppe Scrivano <gscrivan@redhat.com>
The documentation says
> The new Buffer takes ownership of buf, and the
> caller should not use buf after this call.
so use the more directly applicable, and simpler, bytes.Reader
instead, to avoid this potentially risky use.
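For illustration, the difference is roughly the following (the data only needs to be read back, which bytes.Reader provides):
package storage // illustrative package name

import "bytes"

func readBack(buf []byte) {
    // Per the documentation, bytes.NewBuffer takes ownership of buf, so the
    // caller must not keep using buf afterwards.
    _ = bytes.NewBuffer(buf)

    // bytes.NewReader is simpler and sufficient for read-only access, and it
    // never writes to buf.
    _ = bytes.NewReader(buf)
}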
Signed-off-by: Miloslav Trmač <mitr@redhat.com>
Implement ListLayers() for the aufs, btrfs, and devicemapper drivers,
along with a unit test for them.
Stop filtering out directories with names that aren't 64-hex chars in
vfs and overlay ListLayers() implementations, which is more a convention
than a hard rule.
Have layerStore.Wipe() try to remove remaining listed layers after it
removes the layers that the layerStore knew of.
Close() a dangling ReadCloser in NaiveCreateFromTemplate.
Switch from using plain defer to using t.Cleanup() to handle deleting
layers that tests create, and have the addManyLayers() test function do
so as well.
Remove vfs.CopyDir, which as near as I can tell isn't referenced anywhere.
Signed-off-by: Nalin Dahyabhai <nalin@redhat.com>
We previously started "pulling up" images when we changed their names,
and started denying the presence of images in read-only stores which
shared their ID with an image in the read-write store, so that it would
be possible to "remove" names from an image in read-only storage. We
forgot about the Flags field, so start pulling that up, too.
Do all of the above when we're asked to create an image, since denying
the presence of images with the same ID in read-only stores would
prevent us from finding the image by any of the names that it "had" just
a moment before we created the new record.
Signed-off-by: Nalin Dahyabhai <nalin@redhat.com>
When updateNames() copies an image's record from a read-only store into
the read-write store, copy the accompanying data as well.
Add fields for setting data items at creation-time to LayerOptions,
ImageOptions, and ContainerOptions to make this easier for us and our
consumers.
Replace the store-specific Create() (and the one CreateWithFlags() and
Put()) with private create() and put() methods, since they're not
intended for consumption outside of this package, and add Flags to the
options structures we pass into those methods. In create() methods,
make copies of those passed-in options structures before modifying any
of their contents.
Signed-off-by: Nalin Dahyabhai <nalin@redhat.com>
In that case, we can just get read locks, confirm that nothing has changed,
and continue; no need for any serialization on exclusively holding
loadMut / inProcessLock.
Signed-off-by: Miloslav Trmač <mitr@redhat.com>
Instead of basing this on exclusive loading via loadMut (which was incorrect
because, contrary to the original design, the r.layerspathModified
check in r.Modified() could trigger during the lifetime of a read lock),
use a very traditional read-write lock to protect the fields of imageStore.
Also explicitly document how concurrent access to fields of imageStore
is managed.
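In outline, this is the usual convention (field names below are illustrative, not the exact imageStore layout):
package images // illustrative package name

import "sync"

type image struct{ ID string } // stand-in

// imageStoreSketch documents the locking convention only: the fields below
// inProcessLock are read under RLock and written only under Lock.
type imageStoreSketch struct {
    inProcessLock sync.RWMutex // protects the fields below
    images        []*image
    byID          map[string]*image
}

func (s *imageStoreSketch) lookup(id string) (*image, bool) {
    s.inProcessLock.RLock()
    defer s.inProcessLock.RUnlock()
    img, ok := s.byID[id]
    return img, ok
}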
Note that for the btrfs and zfs graph drivers, Diff() can trigger
Mount() and unmount() in a way that violates the locking design.
That's not fixed in this PR.
Signed-off-by: Miloslav Trmač <mitr@redhat.com>
This should be fixed; it just seems too hard to do without
breaking API (and performance).
So, just be clear about that to warn future readers.
It's tracked in https://github.com/containers/storage/issues/1379 .
Signed-off-by: Miloslav Trmač <mitr@redhat.com>
We can't safely do that because the read-only callers don't allow us
to write to layerStore state.
Luckily, with the recent changes to Mounted, we don't really need to
reload in those places.
Also, fairly extensively document the locking design or implications
for users.
Signed-off-by: Miloslav Trmač <mitr@redhat.com>
Instead of reading that value, releasing the mount lock,
and then unmounting, provide a "conditional" unmount mode.
And use that in the existing "loop unmounting" code.
That's at least safer against concurrent processes unmounting
the same layer. But the callers that try to "really unmount"
the layer in a loop are still possibly racing against other processes
trying to mount the layer in the meantime.
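A simplified sketch of the "conditional" mode (stand-in types; the real code also manages the mount count file and the mount lock):
package layers // illustrative package name

import "errors"

var errLayerNotMounted = errors.New("layer is not mounted")

// layerSketch is a stand-in; the real layer store tracks mount counts in its
// mounts file, under the mount lock.
type layerSketch struct {
    mountCount int
}

// unmount with conditional == true refuses to touch a layer that is not
// mounted, so callers racing with other processes do not "over-unmount" it.
func unmount(layer *layerSketch, force, conditional bool) (bool, error) {
    if conditional && layer.mountCount == 0 {
        return false, errLayerNotMounted
    }
    // ... decrement the mount count and really unmount only when it reaches
    // zero (or immediately, when force is set) ...
    return true, nil
}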
I'm not quite sure that we need the "conditional" parameter as an
explicit choice; it seems fairly likely that Umount() should just fail
with ErrLayerNotMounted for all !force callers. I chose to use the flag
to be conservative WRT possible unknown constraints.
Similarly, it's not very clear to me that the unmount loops need to exist;
maybe they should just be unmount(force=true, conditional=true).
Signed-off-by: Miloslav Trmač <mitr@redhat.com>
The lockfile we use properly handles the case that we Touch() it. In
other words, a later Modified() call will return false.
However, we're also looking at the mtime, and that check was failing. This
uses the new AtomicWriteFileWithOpts() feature to also record the
mtime of the file we write on updates.
Signed-off-by: Alexander Larsson <alexl@redhat.com>
This was using the graphDriver field without locks, and the graph driver itself,
while the implementation assumed exclusivity.
Luckily all callers are actually holding the layer store lock for writing, so
use that for exclusion. (layerStore already seems to extensively assume
that locking the layer store for writing guarantees exclusive access to the graph driver,
and because we always recreate a layer store after recreating the graph driver,
that is true in practice.)
Signed-off-by: Miloslav Trmač <mitr@redhat.com>