We previously started "pulling up" images when we changed their names,
and started denying the presence of images in read-only stores which
shared their ID with an image in the read-write store, so that it would
be possible to "remove" names from an image in read-only storage. We
forgot about the Flags field, so start pulling that up, too.
Do all of the above when we're asked to create an image, since denying
the presence of images with the same ID in read-only stores would
prevent us from finding the image by any of the names that it "had" just
a moment before we created the new record.
Signed-off-by: Nalin Dahyabhai <nalin@redhat.com>
When updateNames() copies an image's record from a read-only store into
the read-write store, copy the accompanying data as well.
Add fields for setting data items at creation-time to LayerOptions,
ImageOptions, and ContainerOptions to make this easier for us and our
consumers.
Replace the store-specific Create() (and the lone CreateWithFlags() and
Put()) with private create() and put() methods, since they're not
intended for consumption outside of this package, and add Flags to the
options structures we pass into those methods. In create() methods,
make copies of those passed-in options structures before modifying any
of their contents.
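As a rough sketch of the shape this describes (the struct and method below
are simplified stand-ins, not the exact c/storage definitions), a create()
that defensively copies its options might look like:

```go
package storage

// A simplified, hypothetical sketch of the options shape described above; the
// real LayerOptions, ImageOptions, and ContainerOptions carry more fields.
type imageOptions struct {
	Flags   map[string]interface{} // flags to set on the record at creation time
	BigData map[string][]byte      // data items to store at creation time
}

type imageStore struct{} // persistent state elided

// create copies the passed-in options before modifying them, so callers never
// see their structs (or the maps inside them) mutated behind their backs.
func (s *imageStore) create(id string, names []string, options imageOptions) error {
	flags := make(map[string]interface{}, len(options.Flags))
	for k, v := range options.Flags {
		flags[k] = v
	}
	options.Flags = flags // from here on, only our private copy is modified
	// ... build the record from id, names, and the copied options, then save it ...
	return nil
}
```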
Signed-off-by: Nalin Dahyabhai <nalin@redhat.com>
In that case, we can just get read locks, confirm that nothing has changed,
and continue; there is no need to serialize by exclusively holding
loadMut / inProcessLock.
Signed-off-by: Miloslav Trmač <mitr@redhat.com>
Instead of basing this on the exclusivity of loading via loadMut (which was
incorrect because, contrary to the original design, the r.layerspathModified
check in r.Modified() could trigger during the lifetime of a read lock),
use a very traditional read-write lock to protect the fields of imageStore.
Also explicitly document how concurrent access to fields of imageStore
is managed.
Note that for the btrfs and zfs graph drivers, Diff() can trigger
Mount() and unmount() in a way that violates the locking design.
That's not fixed in this PR.
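A minimal sketch of that read-write-lock arrangement, assuming a heavily
simplified imageStore (the real struct has more fields, plus the on-disk
lock file that is not shown here):

```go
package storage

import "sync"

// Simplified sketch: inProcessLock protects the in-memory fields below within
// this process; a separate lock file (not shown) coordinates between processes.
type imageStore struct {
	inProcessLock sync.RWMutex // protects images and byID

	images []*Image
	byID   map[string]*Image
}

// Image is reduced to a single field for the sketch.
type Image struct {
	ID string
}

// Get only needs a read lock to look at the protected fields.
func (s *imageStore) Get(id string) (*Image, bool) {
	s.inProcessLock.RLock()
	defer s.inProcessLock.RUnlock()
	img, ok := s.byID[id]
	return img, ok
}

// add takes the write lock because it modifies the protected fields.
func (s *imageStore) add(img *Image) {
	s.inProcessLock.Lock()
	defer s.inProcessLock.Unlock()
	s.images = append(s.images, img)
	s.byID[img.ID] = img
}
```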
Signed-off-by: Miloslav Trmač <mitr@redhat.com>
This should be fixed; it just seems too hard to do without
breaking the API (and performance).
So, just be clear about that to warn future readers.
It's tracked in https://github.com/containers/storage/issues/1379 .
Signed-off-by: Miloslav Trmač <mitr@redhat.com>
We can't safely do that because the read-only callers don't allow us
to write to layerStore state.
Luckily, with the recent changes to Mounted, we don't really need to
reload in those places.
Also, fairly extensively document the locking design and its implications
for users.
Signed-off-by: Miloslav Trmač <mitr@redhat.com>
Instead of reading that value, releasing the mount lock,
and then unmounting, provide a "conditional" unmount mode.
And use that in the existing "loop unmounting" code.
That's at least safer against concurrent processes unmounting
the same layer. But the callers that try to "really unmount"
the layer in a loop are still possibly racing against other processes
trying to mount the layer in the meantime.
I'm not quite sure that we need the "conditional" parameter as an
explicit choice; it seems fairly likely that Umount() should just fail
with ErrLayerNotMounted for all !force callers. I chose to use the flag
to be conservative WRT possible unknown constraints.
Similarly, it's not very clear to me that the unmount loops need to exist;
maybe they should just be unmount(force=true, conditional=true).
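A hedged sketch of the conditional-unmount idea; the names and the
mount-count bookkeeping below are illustrative, not the exact layerStore
code:

```go
package storage

import "errors"

var (
	ErrLayerNotMounted = errors.New("layer is not mounted")
	errLayerUnknown    = errors.New("layer not known")
)

// layer is reduced to what the sketch needs.
type layer struct {
	ID         string
	MountCount int // how many Mount() calls are still outstanding
}

type layerStore struct {
	byID map[string]*layer
}

// unmount decrements the mount count and only really unmounts at zero.
// conditional: an already-unmounted layer is not an error (another process
// may have unmounted it in the meantime).
// force: unmount regardless of how many mount references remain.
// It returns whether the layer is still mounted afterwards.
func (r *layerStore) unmount(id string, force, conditional bool) (bool, error) {
	l, ok := r.byID[id]
	if !ok {
		return false, errLayerUnknown
	}
	if l.MountCount == 0 {
		if conditional {
			return false, nil // nothing to do in conditional mode
		}
		return false, ErrLayerNotMounted
	}
	if force {
		l.MountCount = 1
	}
	l.MountCount--
	if l.MountCount > 0 {
		return true, nil // other users still hold the mount
	}
	// ... call into the graph driver to actually unmount the layer here ...
	return false, nil
}
```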
Signed-off-by: Miloslav Trmač <mitr@redhat.com>
The lockfile we use properly handles the case where we Touch() it: a
later Modified() call will return false.
However, we also look at the file's mtime, and that check was failing.
This change uses the new AtomicWriteFileWithOpts() feature to also record
the mtime of the file we write on updates.
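To illustrate the idea without reproducing the exact AtomicWriteFileWithOpts
options (which I'm not restating here), a self-contained stand-in that writes
atomically via a temp file + rename and hands back the resulting mtime could
look like this:

```go
package main

import (
	"fmt"
	"os"
	"path/filepath"
	"time"
)

// atomicWriteRecordingMtime writes data via a temp file + rename and returns
// the mtime of what it wrote, so the caller can remember that timestamp and
// not mistake its own write for a foreign modification later.
func atomicWriteRecordingMtime(filename string, data []byte, perm os.FileMode) (time.Time, error) {
	tmp, err := os.CreateTemp(filepath.Dir(filename), ".tmp-")
	if err != nil {
		return time.Time{}, err
	}
	defer os.Remove(tmp.Name()) // harmless no-op once the rename has happened
	if _, err := tmp.Write(data); err != nil {
		tmp.Close()
		return time.Time{}, err
	}
	if err := tmp.Chmod(perm); err != nil {
		tmp.Close()
		return time.Time{}, err
	}
	fi, err := tmp.Stat() // rename preserves the mtime, so this is the final value
	if err != nil {
		tmp.Close()
		return time.Time{}, err
	}
	if err := tmp.Close(); err != nil {
		return time.Time{}, err
	}
	if err := os.Rename(tmp.Name(), filename); err != nil {
		return time.Time{}, err
	}
	return fi.ModTime(), nil
}

func main() {
	mtime, err := atomicWriteRecordingMtime("mounts.json", []byte("{}\n"), 0o600)
	if err != nil {
		panic(err)
	}
	fmt.Println("remember this as our own write:", mtime)
}
```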
Signed-off-by: Alexander Larsson <alexl@redhat.com>
This was using the graphDriver field, and the graph driver itself, without
locks, while the implementation assumed exclusivity.
Luckily all callers are actually holding the layer store lock for writing, so
use that for exclusion. (layerStore already seems to extensively assume
that locking the layer store for writing guarantees exclusive access to the graph driver,
and because we always recreate a layer store after recreating the graph driver,
that is true in practice.)
Signed-off-by: Miloslav Trmač <mitr@redhat.com>
Only update recorded LastWrite values _after_ we successfully reload
container/image/layer stores; that way, if the reload fails, the functions
will keep failing instead of using obsolete (and possibly partially loaded
and completely invalid) data.
Also, further improve mount change tracking, so that if layerStore.load()
already loads the mounts file, we don't reload it afterwards.
This does not change the store.graphLock users; they will need to be cleaned up
more comprehensively, and separately, in the future.
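The ordering this describes, sketched with hypothetical helper names:

```go
package storage

// lastWrite is a stand-in for the opaque "what have we seen" token described
// above; the method names here are hypothetical.
type lastWrite uint64

type imageStore struct {
	lastWrite lastWrite // the state we have successfully loaded so far
}

// modifiedSince is a stub standing in for asking the lock file whether any
// write happened after prev.
func (s *imageStore) modifiedSince(prev lastWrite) (lastWrite, bool, error) {
	return prev, false, nil
}

// load re-reads the JSON files from disk (elided).
func (s *imageStore) load() error {
	return nil
}

// reloadIfChanged records the new lastWrite only after load() succeeds; if
// load() fails, the old value stays in place and the next caller retries the
// reload instead of silently working with partially loaded data.
func (s *imageStore) reloadIfChanged() error {
	current, modified, err := s.modifiedSince(s.lastWrite)
	if err != nil {
		return err
	}
	if !modified {
		return nil
	}
	if err := s.load(); err != nil {
		return err // s.lastWrite intentionally left untouched
	}
	s.lastWrite = current
	return nil
}
```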
Signed-off-by: Miloslav Trmač <mitr@redhat.com>
If the mounts file is changed, don't reload all of the layers
as well. This is more efficient, and it will allow us to better
track/update r.lastWrite and r.mountsLastWrite in the future.
Existing code calling reloadMountsIfChanged() indicates that this
must already be safe.
Signed-off-by: Miloslav Trmač <mitr@redhat.com>
They are shared state across all users of the *LockFile in the process,
and therefore incorrect for any single consumer to use for change tracking.
Direct users to use the new GetLastWrite, ModifiedSince, and RecordWrite
instead, and convert all c/storage users.
In addition to just being correct, the new API is also more efficient:
- We now initialize stores with GetLastWrite before loading state;
so, we don't trigger an immediate reload on the _second_ use of a store,
due to the problematic semantics of .Modified().
- Unlike Modified() and Touch(), the new API can be safely used without
locking *LockFile.stateMutex, for a tiny speedup.
The conversion is, for now, trivial: just immediately update the lastWrite
value after the ModifiedSince call. That will get better.
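Sketching the consumer-side pattern with the pkg/lockfile names mentioned
above (treat the exact signatures and locking requirements here as
approximate):

```go
package main

import (
	"fmt"

	"github.com/containers/storage/pkg/lockfile"
)

func main() {
	lf, err := lockfile.GetLockFile("/var/lib/mystore/store.lock")
	if err != nil {
		panic(err)
	}

	// Initialize our private "last seen write" token while holding the lock,
	// before loading state, so the first ModifiedSince call doesn't force a
	// spurious reload.
	lf.RLock()
	previous, err := lf.GetLastWrite()
	lf.Unlock()
	if err != nil {
		panic(err)
	}
	// ... load state from disk here ...

	// Later: check cheaply whether any writer recorded a write since then.
	lf.RLock()
	current, modified, err := lf.ModifiedSince(previous)
	lf.Unlock()
	if err != nil {
		panic(err)
	}
	if modified {
		// ... reload state here, then remember what we just caught up to ...
		previous = current
	}

	// When we write, record it so other consumers' ModifiedSince sees it.
	lf.Lock()
	if _, err := lf.RecordWrite(); err != nil {
		lf.Unlock()
		panic(err)
	}
	// ... write state to disk here ...
	lf.Unlock()

	fmt.Println("last seen write token:", previous)
}
```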
Signed-off-by: Miloslav Trmač <mitr@redhat.com>
- Don't exit after saving only one of the locations
- Only touch the lock once if saving both of the locations
Signed-off-by: Miloslav Trmač <mitr@redhat.com>
This looks in the container store for existing data dirs with IDs not in
the container files and removes them. It also adds an (optional) driver
method to list available layers, then uses it to find, and remove, layers
that are not referenced in the layers JSON file.
Losing track of containers and layers can potentially happen in the
case of some kind of unclean shutdown, but mainly it happens at reboot
when using transient storage mode. Such users are recommended to run
a garbage collection at boot.
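A very rough sketch of that garbage-collection shape; the driver interface
and the "known" set below are illustrative only:

```go
package storage

// layerLister is a stand-in for the optional driver capability described
// above: listing the layer IDs that actually exist on disk.
type layerLister interface {
	ListLayers() ([]string, error)
	Remove(id string) error
}

// garbageCollectLayers removes driver layers that are no longer referenced by
// the layers JSON file (represented here simply as the known set).
func garbageCollectLayers(driver layerLister, known map[string]bool) error {
	onDisk, err := driver.ListLayers()
	if err != nil {
		return err
	}
	for _, id := range onDisk {
		if !known[id] {
			// A leftover from an unclean shutdown or a transient-storage
			// reboot: not referenced anywhere, so remove it.
			if err := driver.Remove(id); err != nil {
				return err
			}
		}
	}
	return nil
}
```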
Signed-off-by: Alexander Larsson <alexl@redhat.com>
We store information about the layers in two files, layers.json and
volatile-layers.json. This allows us to treat the two stores
differently, for example saving the volatile json with the NoSync
option, which is faster (but less robust).
In normal mode, the volatile file holds only the layers of containers that
are marked volatile (`--rm`), as these are not expected to persist anyway.
This way, information about such containers is faster to save, and in case
of an unclean shutdown we only risk losing information about those
volatile containers.
In transient mode all container layers (but not image layers) are
stored in the volatile json, and additionally that file lives on tmpfs.
This improves performance as well as automatically making sure no
container layers are persisted across boots.
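To make the split concrete, here is a simplified sketch of the save side;
the file names follow the text above, while the NoSync behavior is reduced
to a plain "skip the fsync" boolean rather than the real ioutils option:

```go
package storage

import (
	"encoding/json"
	"os"
	"path/filepath"
)

type layerRecord struct {
	ID       string `json:"id"`
	Volatile bool   `json:"volatile,omitempty"`
}

// saveLayers writes non-volatile layers to layers.json (synced to disk) and
// volatile ones to volatile-layers.json, written without an fsync, which is
// faster but may be lost on an unclean shutdown; that is acceptable for
// layers that would not survive anyway.
func saveLayers(dir string, layers []layerRecord) error {
	var persistent, volatile []layerRecord
	for _, l := range layers {
		if l.Volatile {
			volatile = append(volatile, l)
		} else {
			persistent = append(persistent, l)
		}
	}
	if err := writeJSON(filepath.Join(dir, "layers.json"), persistent, true); err != nil {
		return err
	}
	return writeJSON(filepath.Join(dir, "volatile-layers.json"), volatile, false)
}

// writeJSON marshals v to path, optionally syncing the file before closing.
func writeJSON(path string, v interface{}, sync bool) error {
	data, err := json.Marshal(v)
	if err != nil {
		return err
	}
	f, err := os.Create(path)
	if err != nil {
		return err
	}
	if _, err := f.Write(data); err != nil {
		f.Close()
		return err
	}
	if sync {
		if err := f.Sync(); err != nil {
			f.Close()
			return err
		}
	}
	return f.Close()
}
```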
Signed-off-by: Alexander Larsson <alexl@redhat.com>
... instead of failing for duplicate names, and instead of
ignoring the "incomplete" state of layers.
Try up to 3 times if other writers are creating inconsistent
state in the meantime.
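The retry is just a small bounded loop; a sketch with an illustrative
sentinel error:

```go
package storage

import (
	"errors"
	"fmt"
)

// errInconsistentState is an illustrative stand-in for the "another writer is
// mid-update" condition described above.
var errInconsistentState = errors.New("storage state is being modified concurrently")

// withRetries re-runs op a few times if it reports a transient inconsistency,
// on the assumption that the other writer will finish (and clean up) shortly.
func withRetries(op func() error) error {
	const maxAttempts = 3
	var err error
	for attempt := 0; attempt < maxAttempts; attempt++ {
		err = op()
		if err == nil || !errors.Is(err, errInconsistentState) {
			return err
		}
	}
	return fmt.Errorf("giving up after %d attempts: %w", maxAttempts, err)
}
```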
Signed-off-by: Miloslav Trmač <mitr@redhat.com>
- In read-write stores, fail even readers if there are incomplete
layers. Right now that would increase observed failures (OTOH
with better consistency), but we'll fix that again soon
- Decide whether to save read-write stores strictly based on
the need to clean up, instead of cleaning up opportunistically
(which is less predictable).
- Return the correct error, depending on whether there
  are duplicate layers
Signed-off-by: Miloslav Trmač <mitr@redhat.com>
... as an error value instead of just a boolean. That
will allow extending the logic to more kinds of inconsistencies,
and consolidates knowledge of the specific kinds of inconsistency
into a single place.
Signed-off-by: Miloslav Trmač <mitr@redhat.com>
... instead of combining control flow branches; we will later change
behavior on some of them.
Should not change behavior.
Signed-off-by: Miloslav Trmač <mitr@redhat.com>
... to be closer to the lock / load set of methods.
Only moves unchanged code, should not change behavior.
Signed-off-by: Miloslav Trmač <mitr@redhat.com>
They are now only used in the constructors, so use a variant
of startReading/startWriting instead. This code path is not
performance-critical, so let's share as much code as possible
to ensure consistency in locking.
Should not change behavior.
Signed-off-by: Miloslav Trmač <mitr@redhat.com>
It is done implicitly by all writers already.
Also fix the documentation not to point at an explicit Touch(),
which is not actually necessary.
Should not change behavior.
Signed-off-by: Miloslav Trmač <mitr@redhat.com>
Exposing the internals of the lock is not necessary, and exposes
too many implementation details.
Should not change behavior.
Signed-off-by: Miloslav Trmač <mitr@redhat.com>
This integrates ReloadIfChanged, and makes it clearer that the responsibility
for maintaining locking details is with the layerStore; we can change it
in a single place.
Should not change behavior.
Signed-off-by: Miloslav Trmač <mitr@redhat.com>
This integrates ReloadIfChanged, and makes it clearer that the responsibility
for maintaining locking details is with the layerStore; we can change it
in a single place.
Should not change behavior.
Signed-off-by: Miloslav Trmač <mitr@redhat.com>
Instead of going
  3 store methods
  -> store.updateNames+enum
  -> 3 sub-store methods
  -> subStore.updateNames+enum
  -> applyNameOperation+enum,
simplify to
  3 store methods
  -> store.updateNames+enum
  -> subStore.updateNames+enum
  -> applyNameOperation+enum.
Should not change behavior. Looking purely at updateNameOperation,
invalid values would now be detected after doing more work,
but there is no way for an external caller to trigger use of
an invalid value anyway.
Signed-off-by: Miloslav Trmač <mitr@redhat.com>
There is no public way to obtain implementations of these interfaces, so
replace the public interfaces with private ones, to make it clear that we
_can_ modify them.
For formal API compatibility, preserve the old interface definitions
as copies.
Should not change behavior.
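Schematically, with a made-up method set standing in for the real ones:

```go
package storage

// Image is reduced to one field for the sketch.
type Image struct {
	ID string
}

// roImageStore is the private interface the package actually uses internally;
// because it is unexported, methods can be added or changed freely.
type roImageStore interface {
	Exists(id string) bool
	Get(id string) (*Image, error)
}

// ROImageStore is kept only for formal API compatibility: a frozen copy of
// the old public interface, no longer returned by anything in the package.
type ROImageStore interface {
	Exists(id string) bool
	Get(id string) (*Image, error)
}
```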
Signed-off-by: Miloslav Trmač <mitr@redhat.com>