err must be nil at that point.
This also un-indents the success case, so that
it proceeds as straight-line code.
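For illustration only, the shape of that change on a toy function (not the function the commit touches):

```go
package main

import (
	"fmt"
	"os"
)

// readConfig handles the error and returns first, so the success path runs
// as straight-line code and err is provably nil past the early return.
func readConfig(path string) (string, error) {
	data, err := os.ReadFile(path)
	if err != nil {
		return "", err
	}
	// err must be nil here.
	return string(data), nil
}

func main() {
	cfg, err := readConfig("/etc/hostname")
	fmt.Println(cfg, err)
}
```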
Signed-off-by: Miloslav Trmač <mitr@redhat.com>
When we're creating a layer using another layer as a template, add the
new layer's uncompressed and compressed digest to the maps we use to
index layers using those digests.
Even though we neglected to do that, searching for a layer by either
digest would still turn up the original template, so this didn't really
break anything.
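A minimal sketch of the indexing step being added; the actual map fields in layers.go may be named differently:

```go
package main

import (
	"fmt"

	digest "github.com/opencontainers/go-digest"
)

// layerIndex is a simplified stand-in for the store's digest indexes.
type layerIndex struct {
	byCompressed   map[digest.Digest][]string
	byUncompressed map[digest.Digest][]string
}

// indexLayer records a newly created layer under both of its digests,
// which is the step that was missing for layers created from a template.
func (idx *layerIndex) indexLayer(id string, compressed, uncompressed digest.Digest) {
	if compressed != "" {
		idx.byCompressed[compressed] = append(idx.byCompressed[compressed], id)
	}
	if uncompressed != "" {
		idx.byUncompressed[uncompressed] = append(idx.byUncompressed[uncompressed], id)
	}
}

func main() {
	idx := &layerIndex{
		byCompressed:   map[digest.Digest][]string{},
		byUncompressed: map[digest.Digest][]string{},
	}
	idx.indexLayer("new-layer-id", digest.FromString("compressed"), digest.FromString("uncompressed"))
	fmt.Println(idx.byUncompressed)
}
```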
Signed-off-by: Nalin Dahyabhai <nalin@redhat.com>
We mistakenly mixed up the uncompressed and compressed digests when
populating the by-uncompressed-digest map.
Signed-off-by: Nalin Dahyabhai <nalin@redhat.com>
When we create a copy of an image's top layer that's intended to be
identical to the top layer, except for having some set of ID mappings
already applied to it, copy over the template layer's compressed and
uncompressed digest and size information, compression information,
tar-split data, and lists of used UIDs and GIDs, if we have them.
The lack of sizing information was forcing ImageSize() to regenerate the
diffs to determine the size of the mapped layers, which shouldn't have
been necessary.
Teach the overlay DiffGetter to look for files in the diff directories
of lower layers if we can't find them in the current layer, so that
tar-split can retrieve content that we didn't have to pull up.
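A rough sketch of the DiffGetter fallback described above, with illustrative names rather than the overlay driver's actual types:

```go
package main

import (
	"fmt"
	"io"
	"os"
	"path/filepath"
)

// multiDirFileGetter looks for a file in the layer's own diff directory
// first and, if it isn't there, falls back to the diff directories of the
// lower layers, so content that was never pulled up can still be served.
type multiDirFileGetter struct {
	diffDirs []string // this layer's diff dir first, then the lower layers'
}

func (g *multiDirFileGetter) Get(path string) (io.ReadCloser, error) {
	for _, dir := range g.diffDirs {
		f, err := os.Open(filepath.Join(dir, path))
		if err == nil {
			return f, nil
		}
		if !os.IsNotExist(err) {
			return nil, err
		}
	}
	return nil, os.ErrNotExist
}

func main() {
	getter := &multiDirFileGetter{diffDirs: []string{"/var/lib/containers/storage/overlay/LAYER/diff"}}
	if _, err := getter.Get("etc/hostname"); err != nil {
		fmt.Println("not found in any diff dir:", err)
	}
}
```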
Signed-off-by: Nalin Dahyabhai <nalin@redhat.com>
Add AddNames and RemoveNames so that operations invoked in parallel
can use them without destroying names already in storage.
For instance, we currently delete names that were already written to
the store. This creates faulty behavior when builds are invoked in
parallel, since it removes names belonging to other builds.
To fix this behavior we must append to the already-written names and
override them only when needed, but this should be optional and must
not break the public API.
The following patch will be used by parallel operations on the Podman
or Buildah side, directly or indirectly.
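The intended semantics, sketched against a toy in-memory store (the real methods operate on the Store interface, but the append/remove behavior is the point):

```go
package main

import "fmt"

// namedStore illustrates the semantics of the new calls: AddNames appends to
// whatever is already recorded, RemoveNames drops only the given entries, and
// neither blindly overwrites the whole list the way SetNames does.
type namedStore struct {
	names map[string][]string // id -> names
}

func (s *namedStore) AddNames(id string, names []string) {
	existing := s.names[id]
	seen := make(map[string]bool, len(existing))
	for _, n := range existing {
		seen[n] = true
	}
	for _, n := range names {
		if !seen[n] {
			existing = append(existing, n)
			seen[n] = true
		}
	}
	s.names[id] = existing
}

func (s *namedStore) RemoveNames(id string, names []string) {
	drop := make(map[string]bool, len(names))
	for _, n := range names {
		drop[n] = true
	}
	kept := s.names[id][:0]
	for _, n := range s.names[id] {
		if !drop[n] {
			kept = append(kept, n)
		}
	}
	s.names[id] = kept
}

func main() {
	s := &namedStore{names: map[string][]string{"img": {"build-a"}}}
	s.AddNames("img", []string{"build-b"}) // a parallel build adds its tag without clobbering build-a
	s.RemoveNames("img", []string{"build-a"})
	fmt.Println(s.names["img"]) // [build-b]
}
```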
Signed-off-by: Aditya R <arajan@redhat.com>
Account for the "diff != nil" path; try to remove even
the metadata of a layer on a failure to save.
Not that there's _much_ hope of being able to save
the version without the new layer when we weren't able
to save the version with the new layer.
Signed-off-by: Miloslav Trmač <mitr@redhat.com>
Currently, if the attempts to recover from a failure
themselves fail, we don't record that at all.
That makes diagnosing the situation, or hypothetically
detecting that the cleanup could never work, much
harder.
So, log the errors.
Alternatively, we could include those failures as extra
text in the returned error; that's less likely to be lost
(e.g. a Go caller would have the extra text available, without
setting up extra infrastructure to capture logs), but possibly
harder to follow.
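A small sketch of the logging approach, using logrus as the logger; the specific operation and names are illustrative:

```go
package main

import (
	"errors"
	"os"

	"github.com/sirupsen/logrus"
)

// writeWithCleanup shows the idea: if the recovery step itself fails, log
// that secondary error instead of silently dropping it, while still
// returning the original error to the caller.
func writeWithCleanup(path string, data []byte) error {
	if err := os.WriteFile(path, data, 0o600); err != nil {
		if cleanupErr := os.Remove(path); cleanupErr != nil && !errors.Is(cleanupErr, os.ErrNotExist) {
			logrus.Errorf("error cleaning up partially-written %q: %v", path, cleanupErr)
		}
		return err
	}
	return nil
}

func main() {
	_ = writeWithCleanup("/nonexistent/dir/file", []byte("data"))
}
```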
Signed-off-by: Miloslav Trmač <mitr@redhat.com>
Don't ReadAll() from a Reader to create a buffer and then create another
Reader to read from that buffer.
Don't close a file and a decompressor that we're using to read the
file's contents when we may still need to read from them after the
current function returns.
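For illustration, a generic before/after of both points (not the code touched by the commit):

```go
package main

import (
	"bytes"
	"compress/gzip"
	"io"
	"os"
)

// Anti-pattern being removed: buffer the whole stream just to wrap it in a
// new reader.
func wasteful(r io.Reader) (io.Reader, error) {
	buf, err := io.ReadAll(r)
	if err != nil {
		return nil, err
	}
	return bytes.NewReader(buf), nil
}

// Preferred: hand the original reader straight to the consumer, and keep the
// file and decompressor open until the caller is done reading, by returning
// an io.ReadCloser that closes both.
func openCompressed(path string) (io.ReadCloser, error) {
	f, err := os.Open(path)
	if err != nil {
		return nil, err
	}
	gz, err := gzip.NewReader(f)
	if err != nil {
		f.Close()
		return nil, err
	}
	return struct {
		io.Reader
		io.Closer
	}{gz, closerFunc(func() error {
		gz.Close()
		return f.Close()
	})}, nil
}

type closerFunc func() error

func (c closerFunc) Close() error { return c() }

func main() {
	_ = wasteful
	_, _ = openCompressed("/tmp/example.tar.gz")
}
```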
Signed-off-by: Nalin Dahyabhai <nalin@redhat.com>
This allows callers of Store.PutLayer to provide the values if
they have already computed them, so that ApplyDiff does not need
to compute them again.
This could quite significantly reduce CPU usage.
The code is a bit clumsy in the use of compressedWriter; it
might make sense to implement a ReadCounter counterpart to the
existing WriteCounter.
(Note that it remains the case that during pulls, both c/image/storage
and ApplyDiff decompress the stream; c/image/storage stores the
compressed, not the decompressed, version in a temporary file.
Nothing changes about that here - it's not obvious that changing it
is worth it, and anyway it's a different concept for a different
PR/discussion.)
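A possible ReadCounter along the lines suggested, mirroring a byte-counting WriteCounter (illustrative, not an existing helper):

```go
package main

import (
	"fmt"
	"io"
	"strings"
)

// readCounter wraps an io.Reader and counts the bytes passing through it.
type readCounter struct {
	r     io.Reader
	Count int64
}

func newReadCounter(r io.Reader) *readCounter {
	return &readCounter{r: r}
}

func (c *readCounter) Read(p []byte) (int, error) {
	n, err := c.r.Read(p)
	c.Count += int64(n)
	return n, err
}

func main() {
	rc := newReadCounter(strings.NewReader("some diff contents"))
	_, _ = io.Copy(io.Discard, rc)
	fmt.Println("bytes read:", rc.Count) // stream size without a second pass
}
```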
Signed-off-by: Miloslav Trmač <mitr@redhat.com>
Have it count the input to idLogger instead of uncompressedDigester;
they should get exactly the same data, but we are going to make
uncompressedDigester optional.
Also make the uncompressedDigester use a separate line so that we
can later change it more easily.
Should not change (observable) behavior.
Signed-off-by: Miloslav Trmač <mitr@redhat.com>
It is a tiny bit expensive, but most importantly
this moves the uses of {un,}compressedDigester so that
we can later make them optional.
Should not change behavior.
Signed-off-by: Miloslav Trmač <mitr@redhat.com>
Have one section deal with detecting compression and re-assembling
the original stream, and another with computing the length and digest
of the original stream.
Should not change behavior.
Signed-off-by: Miloslav Trmač <mitr@redhat.com>
When we're applying a diff, we compress the headers and stash them
elsewhere so that we can use them to correctly reconstruct the layer if
we need to extract its contents later.
By default, the compression uses a 1MB block, and up to GOMAXPROCS
threads, which results in allocating GOMAXPROCS megabytes of memory up
front. That can be much more than we need, especially if the system has
many, many cores. Drop it down to 1 megabyte.
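A sketch of the kind of change described, assuming the stashed data is compressed with klauspost/pgzip, whose SetConcurrency call takes a block size and a block count:

```go
package main

import (
	"bytes"

	"github.com/klauspost/pgzip"
)

func main() {
	// tsdata receives the compressed tar-split data for the layer.
	var tsdata bytes.Buffer
	compressor, err := pgzip.NewWriterLevel(&tsdata, pgzip.BestSpeed)
	if err != nil {
		panic(err)
	}
	// Use a single 1MB block instead of one block per GOMAXPROCS thread, so
	// applying a diff does not allocate GOMAXPROCS megabytes up front on
	// machines with many cores.
	if err := compressor.SetConcurrency(1024*1024, 1); err != nil {
		panic(err)
	}
	// ... stream the stashed tar headers into compressor here ...
	if err := compressor.Close(); err != nil {
		panic(err)
	}
}
```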
Signed-off-by: Nalin Dahyabhai <nalin@redhat.com>
This helps long-running processes like CRI-O determine changes to the
local storage, while handling corrupted images automatically.
The corresponding fix in Podman [0] handles corrupt layers by reloading
the image. This does not work for CRI-O, because it will not reload the
storage on subsequent calls of pullImage if no storage modification has
been done.
[0]: b4bd886fcc
Signed-off-by: Sascha Grunert <sgrunert@redhat.com>
Currently, layers acquired from the additional layer store cannot be exported
(e.g. `podman save`, `podman push`).
This is because the current additional layer store exposes only an *extracted
view* of layers. Tar is not reproducible, so the runtime cannot reproduce a tar
archive that has the same diff ID as the original.
This commit solves this issue by introducing a new API, "`blob`", to the
additional layer store. This file exposes the raw contents of that layer. When
*(c/storage).layerStore.Diff is called, it acquires the diff contents from this
`blob` file, which has the same digest as the original layer.
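A rough sketch of how a Diff implementation can use the new `blob` file; the directory layout shown is illustrative:

```go
package main

import (
	"fmt"
	"io"
	"os"
	"path/filepath"
)

// diffFromAdditionalLayer reads the raw "blob" file the additional layer
// store exposes next to its extracted view, instead of re-creating a tar
// stream from the extracted files (which would not match the original digest).
func diffFromAdditionalLayer(layerDir string) (io.ReadCloser, error) {
	blob, err := os.Open(filepath.Join(layerDir, "blob"))
	if err != nil {
		return nil, fmt.Errorf("additional layer store does not expose a blob for %s: %w", layerDir, err)
	}
	// The caller streams this as the layer diff; its digest matches the
	// original layer, so `podman save`/`podman push` can reuse it.
	return blob, nil
}

func main() {
	if rc, err := diffFromAdditionalLayer("/var/lib/stargz-store/store/SOME_LAYER"); err == nil {
		defer rc.Close()
		_, _ = io.Copy(io.Discard, rc)
	}
}
```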
Signed-off-by: Kohei Tokunaga <ktokunaga.mail@gmail.com>
Similarly to containers and images, add support for storing big data
for layers as well, so that it is possible to store arbitrary data when
it is not feasible to embed it in the layers JSON metadata file.
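A toy sketch of the shape of such an API; method names and storage layout here are illustrative, not the exact interface added:

```go
package main

import (
	"bytes"
	"fmt"
	"io"
	"strings"
)

// bigDataStore mirrors what containers and images already have: values are
// streamed in and out by key instead of being embedded in the layers JSON
// metadata file.
type bigDataStore struct {
	data map[string]map[string][]byte // layer ID -> key -> value
}

func (s *bigDataStore) SetBigData(id, key string, r io.Reader) error {
	value, err := io.ReadAll(r)
	if err != nil {
		return err
	}
	if s.data[id] == nil {
		s.data[id] = map[string][]byte{}
	}
	s.data[id][key] = value
	return nil
}

func (s *bigDataStore) BigData(id, key string) (io.ReadCloser, error) {
	value, ok := s.data[id][key]
	if !ok {
		return nil, fmt.Errorf("no big data %q for layer %s", key, id)
	}
	return io.NopCloser(bytes.NewReader(value)), nil
}

func main() {
	s := &bigDataStore{data: map[string]map[string][]byte{}}
	_ = s.SetBigData("layer-id", "some-key", strings.NewReader("arbitrary payload"))
	rc, _ := s.BigData("layer-id", "some-key")
	out, _ := io.ReadAll(rc)
	fmt.Println(string(out))
}
```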
Signed-off-by: Giuseppe Scrivano <gscrivan@redhat.com>
Fix the logic in an anon-func looking for the `ro` option to allow for
mounting images that are set as read-only.
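For illustration, the kind of exact-match check on mount options that the fix amounts to (a simplified stand-in, not the actual anon-func):

```go
package main

import (
	"fmt"
	"strings"
)

// hasReadOnlyOption scans the comma-separated mount options for an exact
// "ro" entry rather than a substring match, so a read-only image mount is
// recognized as such.
func hasReadOnlyOption(options []string) bool {
	for _, option := range options {
		for _, opt := range strings.Split(option, ",") {
			if opt == "ro" {
				return true
			}
		}
	}
	return false
}

func main() {
	fmt.Println(hasReadOnlyOption([]string{"nodev,ro"}))     // true
	fmt.Println(hasReadOnlyOption([]string{"nodev,remount"})) // false: no bare "ro"
}
```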
Signed-off-by: Valentin Rothberg <rothberg@redhat.com>
# - type: feat, fix, docs, style, refactor, test, chore
# - scope: can be empty (eg. if the change is a global or difficult to assign to a single component)
# - subject: start with verb (such as 'change'), 50-character line
# body: 72-character wrapped. This should answer:
# * Why was this change necessary?
# * How does it address the problem?
# * Are there any side effects?
# footer:
# - Include a link to the ticket, if any.
# - BREAKING CHANGE
Signed-off-by: zvier <zvier20@gmail.com>
We want to block the mounting of additional stores for
read/write, since they will not have any containers associated
with them. But if a user is mounting an image for read-only
access, then there is no reason to block the mount.
Signed-off-by: Daniel J Walsh <dwalsh@redhat.com>
The `layerstore.Load()` on `new[RO]LayerStore` required a held lock on
store initialization. Since the consumers of the layer storage already
call `Load()`, it should not be necessary to lock on initialization of
the layer store.
Signed-off-by: Sascha Grunert <sgrunert@suse.com>
There are cases where the storage database gets out of whack with
whether or not the storage is actually mounted. We need to check
before returning the mount point.
1. A user could go in and umount the storage.
2. If the storage was mounted in a different mount namespace and then
   the mount namespace goes away, the counter will never get decremented
   even though the mount point was removed.
3. If the storage runtime is on non-tmpfs storage, a system reboot could
   happen that will not clear the mount count.
This patch will fix the problem with the layer not being mounted, but
we still have a problem in that we can't figure out when to umount the
image. Not sure that is a solvable problem.
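One way the check can be done, sketched with the moby/sys mountinfo helper (the actual implementation may differ):

```go
package main

import (
	"fmt"

	"github.com/moby/sys/mountinfo"
)

// mountPointIfMounted verifies that a layer's recorded mount point is
// actually mounted before returning it, since the stored mount count can go
// stale after a manual umount, a vanished mount namespace, or a reboot.
func mountPointIfMounted(path string) (string, error) {
	mounted, err := mountinfo.Mounted(path)
	if err != nil {
		return "", err
	}
	if !mounted {
		// Stale bookkeeping: report it as unmounted instead of handing back
		// a mount point that no longer exists.
		return "", nil
	}
	return path, nil
}

func main() {
	mp, err := mountPointIfMounted("/var/lib/containers/storage/overlay/LAYER/merged")
	fmt.Println(mp, err)
}
```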
Signed-off-by: Daniel J Walsh <dwalsh@redhat.com>
When cleaning up an incomplete layer, don't call regular Delete() to
handle it, since that calls Save(), which tries to lock the mountpoints
list, which we've already obtained a lock over. Add a variation on
Delete() that skips the Save() step, which we're about to do anyway, and
call that instead.
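A toy sketch of the split between the regular Delete() and the new Save()-skipping variation (names are illustrative):

```go
package main

import "fmt"

// deleteInternal removes a layer without writing the metadata back out;
// Delete wraps it with a Save. The incomplete-layer cleanup path calls
// deleteInternal directly because it already holds the lock and will Save
// once at the end.
type fakeStore struct {
	layers map[string]bool
	saves  int
}

func (s *fakeStore) deleteInternal(id string) error {
	if !s.layers[id] {
		return fmt.Errorf("layer %s not found", id)
	}
	delete(s.layers, id)
	return nil
}

func (s *fakeStore) Save() { s.saves++ }

func (s *fakeStore) Delete(id string) error {
	if err := s.deleteInternal(id); err != nil {
		return err
	}
	s.Save()
	return nil
}

func main() {
	s := &fakeStore{layers: map[string]bool{"incomplete": true, "ok": true}}

	// Cleanup of an incomplete layer: remove without re-saving or
	// re-locking, then save once.
	_ = s.deleteInternal("incomplete")
	s.Save()

	// The normal path still saves per deletion.
	_ = s.Delete("ok")
	fmt.Println(s.saves) // 2
}
```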
Signed-off-by: Nalin Dahyabhai <nalin@redhat.com>