AtomicWriteFile truly is atomic: it only changes the file
on success. So there's no point in notifying other processes about
a changed file if we failed; they are going to see the same JSON data.
Signed-off-by: Miloslav Trmač <mitr@redhat.com>
There was a possibility of a panic due to such behavior:
attempted to update last-writer in lockfile without the write lock
Fixes: https://github.com/containers/storage/issues/1324
Signed-off-by: Mikhail Khachayants <tyler92@inbox.ru>
When removing layers, try to remove layers before removing their
parents, as the lower-level driver may enforce the dependency beyond
what the higher-level logic does or knows.
Do this by sorting by creation time as an attempt at flattening the
layer tree into an ordered list. It won't be correct if a parent layer
needed to be recreated, but it's more likely to be correct otherwise.
Signed-off-by: Nalin Dahyabhai <nalin@redhat.com>
There was a race condition if a goroutine accessed the layerStore's public
methods under an RO lock while ReloadIfChanged was called at the same time.
In real life, this can occur when there are two concurrent PlayKube
requests in Podman.
Signed-off-by: Mikhail Khachayants <tyler92@inbox.ru>
We now use the Go error-wrapping format specifier `%w` instead of the
deprecated github.com/pkg/errors package.
Signed-off-by: Sascha Grunert <sgrunert@redhat.com>
... so that we don't repeat it all over the place.
Introduce a pretty ugly cleanupFailureContext variable
for that purpose; still, it's better than copy&pasting everything.
This will be even more useful soon.
Signed-off-by: Miloslav Trmač <mitr@redhat.com>
We will want to refer to "layer" in a defer
block, in order to delete that layer. That doesn't work
with "layer" being a named return value, because a
(return nil, -1, ...) sets "layer" to nil.
So, turn "layer" into a local variable, and use an unnamed
return value. And because all return values must be named,
or unnamed, consistently, turn "size" and "err" into
local variables as well.
Then decrease the scope of the "size" and "err" local variables
to simplify understanding the code a bit.
Signed-off-by: Miloslav Trmač <mitr@redhat.com>
For now, this only causes two redundant saves for
non-tarball layers, which is not useful; but it will allow
us to build infrastructure for saving the incomplete record
much earlier.
Signed-off-by: Miloslav Trmač <mitr@redhat.com>
err must be nil at that point.
This also un-indents the success case, so that
it proceeds as straight-line code.
Signed-off-by: Miloslav Trmač <mitr@redhat.com>
When we're creating a layer using another layer as a template, add the
new layer's uncompressed and compressed digest to the maps we use to
index layers using those digests.
When we forgot to do that, searching for a layer by either would still
turn up the original template, so this didn't really break anything.
Signed-off-by: Nalin Dahyabhai <nalin@redhat.com>
We mistakenly mixed up the uncompressed and compressed digests when
populating the by-uncompressed-digest map.
Signed-off-by: Nalin Dahyabhai <nalin@redhat.com>
When we create a copy of an image's top layer that's intended to be
identical to the top layer, except for having some set of ID mappings
already applied to it, copy over the template layer's compressed and
uncompressed digest and size information, compression information,
tar-split data, and lists of used UIDs and GIDs, if we have them.
The lack of sizing information was forcing ImageSize() to regenerate the
diffs to determine the size of the mapped layers, which shouldn't have
been necessary.
Teach the overlay DiffGetter to look for files in the diff directories
of lower layers if we can't find them in the current layer, so that
tar-split can retrieve content that we didn't have to pull up.
Signed-off-by: Nalin Dahyabhai <nalin@redhat.com>
Add AddNames and RemoveNames so that operations invoked in parallel
can use them without destroying names in storage.
For instance:
We currently delete names which were already written to the store.
This creates faulty behavior when builds are invoked in parallel, as
it removes names belonging to other builds.
To fix this behavior we must append to the already-written names and
override only if needed. But this should be optional and not break the public API.
This will be used by parallel operations on the Podman or Buildah end, directly or indirectly.
Signed-off-by: Aditya R <arajan@redhat.com>
Account for the "diff != nil" path; try to remove even
the metadata of a layer on a failure to save.
Not that there's _much_ hope to be able to save
the version without the new layer when we weren't able
to save the version with the new layer.
Signed-off-by: Miloslav Trmač <mitr@redhat.com>
Currently, if the attempts to recover from a failure
themselves fail, we don't record that at all.
That makes diagnosing the situation, or hypothetically
detecting that the cleanup could never work, much
harder.
So, log the errors.
Alternatively, we could include those failures as extra
text in the returned error; that's less likely to be lost
(e.g. a Go caller would have the extra text available, without
setting up extra infrastructure to capture logs), but possibly
harder to follow.
Signed-off-by: Miloslav Trmač <mitr@redhat.com>
Don't ReadAll() from a Reader to create a buffer and then create another
Reader to read from that buffer.
Don't close a file and a decompressor that we're using to read the
file's contents when we may still need to read from them after the
current function returns.
Signed-off-by: Nalin Dahyabhai <nalin@redhat.com>
This allows callers of Store.PutLayer to provide the values if
they have already computed them, so that ApplyDiff does not need
to compute them again.
This could quite significantly reduce CPU usage.
The code is a bit clumsy in the use of compressedWriter; it
might make sense to implement a ReadCounter counterpart to the
existing WriteCounter.
(Note that it remains the case that during pulls, both c/image/storage
and ApplyDiff decompress the stream; c/image/storage stores the
compressed, not the decompressed, version in a temporary file.
Nothing changes about that here - it's not obvious that changing it
is worth it, and anyway it's a different concept for a different
PR/discussion.)
Signed-off-by: Miloslav Trmač <mitr@redhat.com>
Have it count the input to idLogger instead of uncompressedDigester;
they should get exactly the same data, but we are going to make
uncompressedDigester optional.
Also make the uncompressedDigester use a separate line so that we
can later change it more easily.
Should not change (observable) behavior.
Signed-off-by: Miloslav Trmač <mitr@redhat.com>
It is a tiny bit expensive, but most importantly
this moves the uses of {un,}compressedDigester so that
we can later make them optional.
Should not change behavior.
Signed-off-by: Miloslav Trmač <mitr@redhat.com>
Have one section deal with detecting compression and re-assembling
the original stream, and another with computing the length and digest
of the original stream.
Should not change behavior.
Signed-off-by: Miloslav Trmač <mitr@redhat.com>
When we're applying a diff, we compress the headers and stash them
elsewhere so that we can use them to correctly reconstruct the layer if
we need to extract its contents later.
By default, the compression uses a 1MB block, and up to GOMAXPROCS
threads, which results in allocating GOMAXPROCS megabytes of memory up
front. That can be much more than we need, especially if the system has
many, many cores. Drop it down to 1 megabyte.
Signed-off-by: Nalin Dahyabhai <nalin@redhat.com>
This helps long running processes like CRI-O to determine changes to the
local storage, while handling corrupted images automatically.
The corresponding fix in Podman [0] handles corrupt layers by reloading
the image. This does not work for CRI-O, because it will not reload the
storage on subsequent calls of pullImage if no storage modification has
been done.
[0]: b4bd886fcc
Signed-off-by: Sascha Grunert <sgrunert@redhat.com>
Currently, layers acquired from the additional layer store cannot be exported
(e.g. `podman save`, `podman push`).
This is because the current additional layer store exposes only an *extracted view*
of layers. Tar is not reproducible, so the runtime cannot reproduce a tar
archive that has the same diff ID as the original.
This commit solves the issue by introducing a new API, "`blob`", to the
additional layer store. This file exposes the raw contents of the layer. When
*(c/storage).layerStore.Diff is called, it acquires the diff contents from this
`blob` file, which has the same digest as the original layer.
Signed-off-by: Kohei Tokunaga <ktokunaga.mail@gmail.com>
Similarly to containers and images, add support for storing big data
for layers as well, so that it is possible to store arbitrary data when
it is not feasible to embed it in the layers' JSON metadata file.
Signed-off-by: Giuseppe Scrivano <gscrivan@redhat.com>
Fix the logic in an anon-func looking for the `ro` option to allow for
mounting images that are set as read-only.
Signed-off-by: Valentin Rothberg <rothberg@redhat.com>
# - type: feat, fix, docs, style, refactor, test, chore
# - scope: can be empty (eg. if the change is a global or difficult to assign to a single component)
# - subject: start with verb (such as 'change'), 50-character line
# - body: 72-character wrapped. This should answer:
# * Why was this change necessary?
# * How does it address the problem?
# * Are there any side effects?
# - footer:
# - Include a link to the ticket, if any.
# - BREAKING CHANGE
Signed-off-by: zvier <zvier20@gmail.com>
We want to block the mounting of additional stores for
read/write, since they will not have any containers associated
with them. But if a user is mounting an image for read-only
access, then there is no reason to block the mount.
Signed-off-by: Daniel J Walsh <dwalsh@redhat.com>
The `layerstore.Load()` on `new[RO]LayerStore` required a held lock on
store initialization. Since the consumers of the layer storage already
call `Load()`, it should not be necessary to lock on initialization of
the layer store.
Signed-off-by: Sascha Grunert <sgrunert@suse.com>
There are cases where the storage database gets out of whack with
whether or not the storage is actually mounted. We need to check
before returning the mount point.
1. A user could go in and unmount the storage.
2. If the storage was mounted in a different mount namespace and then
the mount namespace goes away, the counter will never get decremented
even though the mount point was removed.
3. If the storage runtime is on non-tmpfs storage, a system reboot could
occur that will not clear the mount count.
This patch will fix the problem with the layer not being mounted, but
we still have a problem in that we can't figure out when to unmount the
image. Not sure that is a solvable problem.
Signed-off-by: Daniel J Walsh <dwalsh@redhat.com>
When cleaning up an incomplete layer, don't call regular Delete() to
handle it, since that calls Save(), which tries to lock the mountpoints
list, which we've already obtained a lock over. Add a variation on
Delete() that skips the Save() step, which we're about to do anyway, and
call that instead.
Signed-off-by: Nalin Dahyabhai <nalin@redhat.com>
If we need to re-save the layers list when we've loaded it, to either
solve a duplicate name issue or to clean up a partially-constructed
layer, don't make the mistake of attempting to take another lock on the
mounts list.
Signed-off-by: Nalin Dahyabhai <nalin@redhat.com>
Using `os.Pipe()` has a side effect when writing tar files, surfacing as
differing digests (see [1] for reference). Instead, use `io.Pipe()` with
a workaround to avoid writes after the reader has been closed; such
writes are tolerated by `os.Pipe()` but otherwise cause the tar package to error.
[1] https://github.com/containers/libpod/pull/3705#issuecomment-517954910
Signed-off-by: Valentin Rothberg <rothberg@redhat.com>
Before applying mount counts that we've just loaded, reset all of the
counts.
If a layer that we thought was mounted was unmounted by another process,
there won't be a record of it in the mounts list any more, so we
wouldn't reset the mount count on our record for that layer.
Signed-off-by: Nalin Dahyabhai <nalin@redhat.com>
Modified patch from Kevin Pelzel.
Also changed ApplyDiff to take a new ApplyDiffOpts struct.
Signed-off-by: Kevin Pelzel <kevinpelzel22@gmail.com>
Signed-off-by: Daniel J Walsh <dwalsh@redhat.com>
Add a field to the Layer structure that lets us make note of the set of
UIDs and GIDs which own files in the layer, populated by scanning the
diff that we used to populate the layer, if there was one.
Signed-off-by: Nalin Dahyabhai <nalin@redhat.com>
Use RLock() to lock stores that we know are read-only, and panic in
Lock() if we know that we're not a read-write lock.
Signed-off-by: Nalin Dahyabhai <nalin@redhat.com>
Add a RecursiveLock() API to allow for recursive acquisitions of a
writer lock within the same process space. This is yet another
requirement for the copy-detection mechanism in containers/image where
multiple goroutines can be pulling the same blob. Having a recursive
lock avoids a complex synchronization mechanism, as the commit order is
determined by the corresponding index in the image.
Signed-off-by: Valentin Rothberg <rothberg@redhat.com>
Separate loading and saving the mountpoints.json table out of the main
layer load/save paths so that they can be called independently, so that
we can mount and unmount layers (which requires that we update that
information) when the layer list itself may only be held with a read
lock.
The new loadMounts() and saveMounts() methods need to be called only for
read-write layer stores. Callers that just refer to the mount
information can take a read lock on the mounts information, but callers
that modify the mount information need to acquire a write lock.
Break the unwritten "stores don't manage their own locks" rule and have
the layer store handle managing the lock for the mountpoints list, with
the understanding that the layer store's lock will always have been
acquired before we try to take the mounts lock.
Signed-off-by: Nalin Dahyabhai <nalin@redhat.com>
Don't attempt to remove conflicting names or finish layer cleanups if we
only have a read-only lock on layer or image stores, since doing either
means we'd have to modify the list of layers or images, and our lock
that we've obtained doesn't allow us to do that.
Signed-off-by: Nalin Dahyabhai <nalin@redhat.com>
Clarify that Locker.Locked() checks if we have a write lock, since
that's what we care about whenever we check it.
Signed-off-by: Nalin Dahyabhai <nalin@redhat.com>
Implement reader-writer locks to allow multiple readers to hold
the lock in parallel.
* The locks are still based on fcntl(2).
* Changing the lock from a reader to a writer and vice versa will block
  on the syscall.
* A writer lock can be held by only one process. To protect against
  concurrent accesses by goroutines within the same process space, use a
  writer mutex.
* Extend the Locker interface with the `RLock()` method to acquire a
  reader lock. If the lock is set to be read-only, all calls to
  `Lock()` will be redirected to `RLock()`. A reader lock is only
  released via fcntl(2) when all goroutines within the same process space
  have unlocked it. This is done via an internal counter which is
  protected (among other things) by an internal state mutex.
* Panic on violations of the lock protocol, namely when calling
  `Unlock()` on an unlocked lock. This helps detect violations in
  the code but also protects the storage from corruption. Doing this
  has revealed some bugs fixed in earlier commits.
Signed-off-by: Valentin Rothberg <rothberg@redhat.com>
Add a CreateFromTemplate() method to graph drivers, and use it instead
of a driver-oblivious diff/put method when we want to create a copy of
an image's top layer that has the same parent and which differs from the
original only in its ID maps.
This lets drivers that can quickly make an independent layer based on
another layer do something smarter than we were doing with the
driver-oblivious method. For some drivers, a native method is
dramatically faster.
Note that the driver needs to be able to do this while still exposing
just one notional layer (i.e., one link in the chain of layers for a
given container) to the higher levels of the APIs, so if the new layer
is actually a child of the template layer, that needs to remain a detail
that's private to the driver.
Signed-off-by: Nalin Dahyabhai <nalin@redhat.com>
From my tests, I've seen a net improvement of around 30% in wall-clock
time when decompressing layers.
These additional packages will need to be re-vendored:
github.com/klauspost/pgzip v1.2.1
github.com/klauspost/compress v1.4.1
github.com/klauspost/cpuid v1.2.0
Signed-off-by: Giuseppe Scrivano <gscrivan@redhat.com>
This check was wrongly removed with #198.
The check must stay, as it's now part of the Stop/Delete API, so reintroduce it.
This fixes #233 and the associated CRI-O issues.
This PR + kubernetes-sigs/cri-o#1910 fully fix the issue.
I'm going to revendor c/storage in CRI-O to fully fix CRI-O after this is merged.
Signed-off-by: Antonio Murdaca <runcom@linux.com>
We've seen a panic on Azure with CRI-O/OCP:
Nov 08 17:52:58 master-000002 crio[5779]: panic: runtime error: invalid memory address or nil pointer dereference
Nov 08 17:52:58 master-000002 crio[5779]: [signal SIGSEGV: segmentation violation code=0x1 addr=0x8 pc=0x55cec3a16669]
Nov 08 17:52:58 master-000002 crio[5779]: goroutine 127 [running]:
Nov 08 17:52:58 master-000002 crio[5779]: panic(0x55cec467fda0, 0x55cec52cba20)
Nov 08 17:52:58 master-000002 crio[5779]: /opt/rh/go-toolset-7/root/usr/lib/go-toolset-7-golang/src/runtime/panic.go:551 +0x3c5 fp=0xc4206e17f0 sp=0xc4206e1750 pc=0x55cec2f47685
Nov 08 17:52:58 master-000002 crio[5779]: runtime.panicmem()
Nov 08 17:52:58 master-000002 crio[5779]: /opt/rh/go-toolset-7/root/usr/lib/go-toolset-7-golang/src/runtime/panic.go:63 +0x60 fp=0xc4206e1810 sp=0xc4206e17f0 pc=0x55cec2f46520
Nov 08 17:52:58 master-000002 crio[5779]: runtime.sigpanic()
Nov 08 17:52:58 master-000002 crio[5779]: /opt/rh/go-toolset-7/root/usr/lib/go-toolset-7-golang/src/runtime/signal_unix.go:388 +0x17e fp=0xc4206e1860 sp=0xc4206e1810 pc=0x55cec2f5d7fe
Nov 08 17:52:58 master-000002 crio[5779]: github.com/kubernetes-sigs/cri-o/vendor/github.com/containers/image/storage.(*storageImageDestination).Commit(0xc420556540, 0x55cec48b7fe0, 0xc4200ac048, 0x0, 0x0)
Nov 08 17:52:58 master-000002 crio[5779]: /builddir/build/BUILD/cri-o-71cc46544a8d31229c4ef2b88b42485f4d997c03/_output/src/github.com/kubernetes-sigs/cri-o/vendor/github.com/containers/image/storage/storage_imag
That nil pointer dereference is caused by containers/image storage
Commit() as it ignores ErrDuplicateID but the layer object is later
reused when nil.
This commit fixes the panic above by returning the layer in-use even on error so
containers/image won't panic.
I'll vendor this in c/image once merged and then in CRI-O.
Signed-off-by: Antonio Murdaca <runcom@linux.com>
This patch adds a MountOpts field to the drivers so we can simplify
the interface to Get and allow additional options to be passed in the future.
Signed-off-by: Daniel J Walsh <dwalsh@redhat.com>
If we needed to try to update the ID mappings on a just-created layer,
we were inadvertently failing to check that the layer had been
successfully created.
Signed-off-by: Nalin Dahyabhai <nalin@redhat.com>
I have experienced "layer not known" corruption triggered by concurrent
buildah/skopeo processes, and hopefully lock sanity checks will help to
prevent this kind of problem.
Signed-off-by: Zac Medico <zmedico@gmail.com>
podman unmount wants to know whether the image is mounted only once,
and to refuse to unmount if the container state expects it to be mounted.
Signed-off-by: Daniel J Walsh <dwalsh@redhat.com>
Add force to umount to force the umount of a container image
Add an interface to indicate whether or not the layer is mounted
Add a boolean return from unmount to indicate when the layer is really unmounted
Signed-off-by: Daniel J Walsh <dwalsh@redhat.com>
When creating new Layers, Images, or Containers, only try to copy the
newly-created results if we actually created them.
Signed-off-by: Nalin Dahyabhai <nalin@redhat.com>
Add store methods for finding the list of UIDs and GIDs which probably
need to be mapped when a given layer, or a container's layer, is going
to be used for a container that runs with the configured ID mappings in
a separate user namespace. The layer has to have been mounted at least
once in order for us to know where it goes.
Signed-off-by: Nalin Dahyabhai <nalin@redhat.com>
Tweak the order of arguments to LayerStore.Create()/CreateWithFlags()/Put()
so that the moreOptions struct is directly after the options map.
Signed-off-by: Nalin Dahyabhai <nalin@redhat.com>
Expose reading and writing ID mapping in the archive and chrootarchive
packages, and in the driver interface. Generally this means that
when computing or applying diffs, we need to have ID mappings passed in
that are specific to the layers we're using.
Signed-off-by: Nalin Dahyabhai <nalin@redhat.com>
Add support to the Store objects for per-container UID/GID mapping.
* UID and GID maps can be specified when creating layers and containers.
* If mapping options are specified when creating a container, those
options are used for creating the layer which we create for the
container and recorded with the container for convenience.
* A layer defaults to using the ID mapping configured for its parent, or
to the default which was used to initialize the Store object if it has
no parent.
Signed-off-by: Nalin Dahyabhai <nalin@redhat.com>
Always copy slices and maps in Layer, Image, and Container structures
before handing them back to callers so that, even if they modify them
directly, they won't accidentally mess with our in-memory copies of
those fields in the copies of the structures that we're using.
Signed-off-by: Nalin Dahyabhai <nalin@redhat.com>
Currently, when we do a commit, we are mounting the container without using
the mount label. In certain situations we can leak mount points where the
image is already mounted with a label. If you then attempt to commit the
image, the kernel will attempt to mount the image without a label. The
kernel will reject this mount since SELinux does not allow the same image
to be mounted with different labels.
Passing down the label to the diff drivers fixes this issue.
Signed-off-by: Daniel J Walsh <dwalsh@redhat.com>
When we read items from disk, if maps in the structures are empty, they
won't be allocated as part of the decoding process. When we
subsequently go to read or write something from such a map, make sure
it's been initialized.
Add some validation of names that we convert to file names, and of
digest values, so that we can be more precise about the error code we
return when there's a problem with the values.
Signed-off-by: Nalin Dahyabhai <nalin@redhat.com>
Take a guess at the final size of some slices that we build up item by
item, and try to allocate enough capacity for them before starting to
build them. It's probably not a big speedup, though.
Signed-off-by: Nalin Dahyabhai <nalin@redhat.com>
We already deduplicated names in Store.SetNames(), but we weren't also
doing that when creating layers, images, and containers, or in the
individual store SetNames() methods.
Signed-off-by: Nalin Dahyabhai <nalin@redhat.com>
Had to vendor in a new version of golang.org/x/net to build.
Also had to make some changes to drivers to handle
archive.Reader -> io.Reader
archive.Archive -> io.ReadCloser
Also update .gitignore to ignore emacs files, containers-storage.*,
and generated man pages.
Also no longer test Travis against Go 1.7; CRI-O and Moby have also
done this.
Signed-off-by: Daniel J Walsh <dwalsh@redhat.com>
Use the standard library's "errors" package to create errors so that
backtraces in wrapped errors terminate at the point where the error was
first wrapped, and not at the line where we created the error, which
isn't as useful for troubleshooting.
Signed-off-by: Nalin Dahyabhai <nalin@redhat.com>
Properly heed the DiffOptions.Compression value when generating a layer
diff between a layer and its parent, when there's no tarsplit data.
Signed-off-by: Nalin Dahyabhai <nalin@redhat.com>
Cache the digests and sizes of a diff, both compressed and uncompressed,
along with the type of compression detected for it, that's supplied to
ApplyDiff() or Put() in the layer structure, and add methods to find a
list of layers that match one or the other digest.
Signed-off-by: Nalin Dahyabhai <nalin@redhat.com>
When deleting a layer or a container, the code was always allocating a
new slice just to remove an element from the original slice.
Profiling CRI-O with c/storage showed that doing this on every delete is
pretty expensive:
```
         .          .    309:	newContainers := []Container{}
         .          .    310:	for _, candidate := range r.containers {
         .          .    311:		if candidate.ID != id {
  528.17kB   528.17kB    312:			newContainers = append(newContainers, candidate)
         .          .    313:		}
         .          .    314:	}

         .          .    552:	newLayers := []Layer{}
         .          .    553:	for _, candidate := range r.layers {
         .          .    554:		if candidate.ID != id {
    1.03MB     1.03MB    555:			newLayers = append(newLayers, candidate)
         .          .    556:		}
         .          .    557:	}
         .          .    558:	r.layers = newLayers
```
This patch just filters the element to remove out of the original
slice without allocating a new one. After this patch, the profiler no
longer shows this memory overhead.
Signed-off-by: Antonio Murdaca <runcom@redhat.com>
Add an optional *DiffOptions parameter to Diff() methods (which can be
nil), to allow overriding of default behaviors.
At this time, that's just what type of compression is applied, if we
want something other than what was recorded when the diff was applied,
but we can add more later if needed.
Signed-off-by: Nalin Dahyabhai <nalin@redhat.com>
Add a Created field to the Layer, Image, and Container structures that we
initialize when creating one of them.
Signed-off-by: Nalin Dahyabhai <nalin@redhat.com>
Implement read-only versions of layer and image store interfaces which
allocate read-only locks and which return errors whenever a write
function is called (which should only be possible after a type
assertion, since they're not part of the read-only interfaces).
Signed-off-by: Nalin Dahyabhai <nalin@redhat.com>
Split the LayerStore and ImageStore interfaces into read-only and
write-only subset interfaces, and make the proper stores into unions of
the read-only and write-only method sets.
Signed-off-by: Nalin Dahyabhai <nalin@redhat.com>
We need to be able to acquire locks on storage areas which aren't
mounted read-write, which return errors when we attempt to open a file
in the mode where we can take write locks on them. This patch adds a
read-only lock type for use in those cases.
A given file can be opened for read-locking or write-locking, but not
both. Our Locker interface gains an IsReadWrite() method to let callers
tell the difference.
Based on patches by Dan Walsh <dwalsh@redhat.com>
Signed-off-by: Nalin Dahyabhai <nalin@redhat.com>
Fix consistency errors we'd hit after creating or deleting a layer,
image, or container, by replacing the slice of items in their respective
stores with a slice of pointers to items, so that pointers in name- and
ID-based indexes don't become invalid when the slice is resized.
Signed-off-by: Nalin Dahyabhai <nalin@redhat.com>