We can't safely do that because the read-only callers don't allow us
to write to layerStore state.
Luckily, with the recent changes to Mounted, we don't really need to
reload in those places.
Also, fairly extensively document the locking design and its implications
for users.
Signed-off-by: Miloslav Trmač <mitr@redhat.com>
Instead of reading that value, releasing the mount lock,
and then unmounting, provide a "conditional" unmount mode.
And use that in the existing "loop unmounting" code.
That's at least safer against concurrent processes unmounting
the same layer. But the callers that try to "really unmount"
the layer in a loop are still possibly racing against other processes
trying to mount the layer in the meantime.
I'm not quite sure that we need the "conditional" parameter as an
explicit choice; it seems fairly likely that Umount() should just fail
with ErrLayerNotMounted for all !force callers. I chose to use the flag
to be conservative WRT possible unknown constraints.
Similarly, it's not very clear to me that the unmount loops need to exist;
maybe they should just be unmount(force=true, conditional=true).
Signed-off-by: Miloslav Trmač <mitr@redhat.com>
store.imageStore is always initialized, and never nil,
after the store is initialized, so store.getImageStore()
amounts to a single field access.
So just do that, and eliminate the error path and the local variables
all over; just use s.imageStore directly.
Should not change behavior.
Signed-off-by: Miloslav Trmač <mitr@redhat.com>
store.imageStore and store.roImageStores are always initialized, and never nil,
after the store is initialized, so store.getROImageStores()
amounts to a single field access.
So just do that, and eliminate the error path and the local variables
all over; just use s.roImageStores directly.
Should not change behavior.
Signed-off-by: Miloslav Trmač <mitr@redhat.com>
store.containerStore is always initialized, and never nil,
after the store is initialized, so store.getContainerStore()
amounts to a single field access.
So just do that, and eliminate the error path and the local variables
all over; just use s.containerStore directly.
Should not change behavior.
Signed-off-by: Miloslav Trmač <mitr@redhat.com>
... to add an ...UseGetters suffix, to warn against direct use.
This is not currently necessary, but we will encourage direct
use of the container and image store fields, so the asymmetry vs.
layer store objects needs to be warned against.
Should not change behavior.
Signed-off-by: Miloslav Trmač <mitr@redhat.com>
This was using the graphDriver field, and the graph driver itself, without
locks, while the implementation assumed exclusivity.
Luckily all callers are actually holding the layer store lock for writing, so
use that for exclusion. (layerStore already seems to extensively assume
that locking the layer store for writing guarantees exclusive access to the graph driver,
and because we always recreate a layer store after recreating the graph driver,
that is true in practice.)
Signed-off-by: Miloslav Trmač <mitr@redhat.com>
This allows getting all the layer stores with a single s.graphLock access.
We do it even in store.CreateContainer, where the read-only layers
are only needed conditionally, because getting the read-only
layers is almost always a simple field access, so not having to lock
graphLock again is almost certainly an improvement.
Stand-alone store.getROLayerStores is now unused and has been removed.
Signed-off-by: Miloslav Trmač <mitr@redhat.com>
... and use it in store.Diff to only lock graphLock once,
and to use a fresh layerStore object instead of an obsolete one.
Signed-off-by: Miloslav Trmač <mitr@redhat.com>
... and use it in callers that already obtain the graphLock.
This is both an optimization, and a correctness fix: if we need to
reload the graph driver, we now don't do the operation on an obsolete
layerStore instance.
Signed-off-by: Miloslav Trmač <mitr@redhat.com>
It's now the only caller, so:
- inline it, eliminating an always-false graphDriver != nil check
- only update graphLockLastWrite after we successfully reload.
s.graphDriver is now never nil during the lifetime of the store
(but it is only safe to access it with graphLock held).
Signed-off-by: Miloslav Trmač <mitr@redhat.com>
... to avoid the repetitive checks for store.graphLock.ModifiedSince.
store.getROLayerStores now checks graphLock.ModifiedSince; we'll optimize
that away again.
store.Shutdown now checks graphLock.ModifiedSince; that seems like
a good idea anyway.
Also attempt to document the purpose and rules of using graphLock;
it's quite likely incomplete but at least it's a starting point.
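The shape of such a helper might look like this (a minimal sketch with a counter standing in for the on-disk last-write marker; the method names are illustrative, not the real containers/storage API):

```go
package main

import (
	"fmt"
	"sync"
)

// lastWrite is an opaque token describing "the writes we have seen so far";
// the real code derives it from the lock file, here it is just a counter.
type lastWrite uint64

type graphLock struct {
	mu     sync.Mutex
	writes lastWrite
}

func (g *graphLock) recordWrite() { g.writes++ } // caller holds mu

// modifiedSince reports whether anyone wrote after w, and returns the
// current token for the caller to remember once it has reloaded.
func (g *graphLock) modifiedSince(w lastWrite) (bool, lastWrite) {
	return g.writes != w, g.writes
}

type store struct {
	lock     *graphLock
	lastSeen lastWrite
	reloads  int // stands in for recreating the graph driver and layer stores
}

// startUsingGraphDriver is the shared "lock and reload" helper: it takes
// graphLock, reloads exactly once if another process wrote in the meantime,
// and leaves the lock held for the caller.
func (s *store) startUsingGraphDriver() {
	s.lock.mu.Lock()
	if modified, next := s.lock.modifiedSince(s.lastSeen); modified {
		s.reloads++ // reload graph driver state here
		s.lastSeen = next
	}
}

func (s *store) stopUsingGraphDriver() { s.lock.mu.Unlock() }

func main() {
	g := &graphLock{}
	s := &store{lock: g}

	g.mu.Lock()
	g.recordWrite() // another process modifies shared state
	g.mu.Unlock()

	s.startUsingGraphDriver() // sees the write, reloads once
	s.stopUsingGraphDriver()
	s.startUsingGraphDriver() // nothing changed, no reload
	s.stopUsingGraphDriver()
	fmt.Println(s.reloads) // 1
}
```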
Signed-off-by: Miloslav Trmač <mitr@redhat.com>
That way we don't lock store.graphLock twice, and we will have
only one location that doesn't use the shared "lock and reload"
utility for using store.graphLock.
Signed-off-by: Miloslav Trmač <mitr@redhat.com>
At least https://github.com/containers/storage/pull/926 suggests that
using timestamps "seems to fail", without elaborating further.
At the very least, ModifiedSince is the more general check, because it
can work across shared filesystems or on filesystems with bad timestamp
granularity, it's about as costly (a single syscall, pread vs. fstat),
and we can now completely eliminate tracking store.lastLoaded.
The more common code shape will also help factor the common code out further.
Signed-off-by: Miloslav Trmač <mitr@redhat.com>
Don't modify store.graphDriverName on reloads, so that it is constant
throughout the life of the store and can be accessed without locking,
as callers have actually been doing (incorrectly, until now).
That requires special-casing the initial load(); so, split
the actual driver creation into a createGraphDriverLocked() method.
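A sketch of that split (toy types; only the set-once pattern is the point, everything else, including the driver name, is illustrative):

```go
package main

import "fmt"

type graphDriver struct{ name string }

type store struct {
	// graphDriverName is assigned exactly once, in load(), and never again,
	// so it can be read for the lifetime of the store without locking.
	graphDriverName string
	// graphDriver may be replaced on reload and must only be accessed
	// with graphLock held (locking elided in this sketch).
	graphDriver *graphDriver
}

// createGraphDriverLocked holds the driver-creation logic shared between
// the initial load and later reloads.
func (s *store) createGraphDriverLocked() *graphDriver {
	return &graphDriver{name: "overlay"} // hypothetical driver choice
}

// load runs once at store creation; it is the only place that sets
// graphDriverName.
func (s *store) load() {
	d := s.createGraphDriverLocked()
	s.graphDriverName = d.name
	s.graphDriver = d
}

// reload recreates the driver but deliberately leaves graphDriverName
// untouched, keeping it constant.
func (s *store) reload() {
	s.graphDriver = s.createGraphDriverLocked()
}

func main() {
	s := &store{}
	s.load()
	name := s.graphDriverName
	s.reload()
	fmt.Println(name == s.graphDriverName) // true
}
```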
Signed-off-by: Miloslav Trmač <mitr@redhat.com>
This might not be entirely correct, but _some_ attempt to set rules
must be better than nothing.
Should not change behavior.
Signed-off-by: Miloslav Trmač <mitr@redhat.com>
They are a shared state across all users of the *LockFile in the process,
and therefore incorrect to use for any single consumer for change tracking.
Direct users to use the new GetLastWrite, ModifiedSince, and RecordWrite
instead, and convert all c/storage users.
In addition to just being correct, the new API is also more efficient:
- We now initialize stores with GetLastWrite before loading state;
so, we don't trigger an immediate reload on the _second_ use of a store,
due to the problematic semantics of .Modified().
- Unlike Modified() and Touch(), the new API can be safely used without
locking *LockFile.stateMutex, for a tiny speedup.
The conversion is, for now, trivial, just immediately updating the lastWrite
value after the ModifiedSince call. That will get better.
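The consumer pattern might look like this (method names follow this message, but the atomic counter is a stand-in for the on-disk marker that the real implementation reads from the lock file; because it is atomic, none of these methods needs stateMutex, matching the point above):

```go
package main

import (
	"fmt"
	"sync/atomic"
)

type lastWrite uint64

// lockFile models the process-shared *LockFile with the change-tracking
// API named above; the counter stands in for the real on-disk state.
type lockFile struct {
	writes atomic.Uint64
}

// GetLastWrite captures the current write state. A consumer calls this
// *before* loading data, so its first ModifiedSince call does not report a
// spurious change (the flaw in the old Modified() semantics).
func (l *lockFile) GetLastWrite() lastWrite {
	return lastWrite(l.writes.Load())
}

// RecordWrite notes that this process modified the protected data.
func (l *lockFile) RecordWrite() {
	l.writes.Add(1)
}

// ModifiedSince reports whether anyone wrote after prev, returning the new
// state for the caller to remember once it has reloaded.
func (l *lockFile) ModifiedSince(prev lastWrite) (bool, lastWrite) {
	cur := lastWrite(l.writes.Load())
	return cur != prev, cur
}

func main() {
	lf := &lockFile{}
	seen := lf.GetLastWrite() // capture before the initial load

	lf.RecordWrite() // some process writes
	if modified, next := lf.ModifiedSince(seen); modified {
		seen = next // reload state here, then remember the new point
	}
	modified, _ := lf.ModifiedSince(seen)
	fmt.Println(modified) // false: we are up to date again
}
```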
Signed-off-by: Miloslav Trmač <mitr@redhat.com>
This looks in the container store for existing data dirs with ids not in
the container files and removes them. It also adds an (optional) driver
method to list available layers, then uses this and compares it to the
layers json file, removing layers that are not referenced.
Losing track of containers and layers can potentially happen in the
case of some kind of unclean shutdown, but mainly it happens at reboot
when using transient storage mode. Such users are advised to run
garbage collection at boot.
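The core of that check, reduced to a sketch (comparing what exists on disk against what the metadata references; `orphans` is a hypothetical helper, not the real API):

```go
package main

import (
	"fmt"
	"sort"
)

// orphans returns the ids that exist on disk (container data directories,
// or layers reported by the driver's listing) but are not referenced by
// the corresponding JSON metadata; those are the candidates for removal.
func orphans(onDisk, referenced []string) []string {
	known := make(map[string]bool, len(referenced))
	for _, id := range referenced {
		known[id] = true
	}
	var out []string
	for _, id := range onDisk {
		if !known[id] {
			out = append(out, id)
		}
	}
	sort.Strings(out)
	return out
}

func main() {
	// Layers the driver sees on disk vs. layers recorded in layers.json:
	fmt.Println(orphans(
		[]string{"aaa", "bbb", "ccc"},
		[]string{"bbb"},
	)) // [aaa ccc]
}
```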
Signed-off-by: Alexander Larsson <alexl@redhat.com>
We store information about the layers in two files, layers.json and
volatile-layers.json. This allows us to treat the two stores
differently, for example saving the volatile json with the NoSync
option, which is faster (but less robust).
In normal mode we store only layers for the containers that are marked
volatile (`--rm`), as these are not expected to persist anyway. This way,
information about such containers is faster to save, and in case
of an unclean shutdown we only risk losing information about
such volatile containers.
In transient mode all container layers (but not image layers) are
stored in the volatile json, and additionally it is stored in tmpfs.
This improves performance and automatically ensures that no
container layers are persisted across boots.
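The file-selection rule described above can be sketched like this (toy types; the real code represents layers differently, and the NoSync handling is elided):

```go
package main

import "fmt"

type layer struct {
	isContainer bool // a container layer, as opposed to an image layer
	volatile    bool // the container was created with --rm
}

// inVolatileFile reports whether a layer's metadata belongs in
// volatile-layers.json (saved with NoSync: fast, but lost on an unclean
// shutdown) rather than the durable layers.json.
func inVolatileFile(l layer, transientMode bool) bool {
	if !l.isContainer {
		return false // image layers are always persisted durably
	}
	if transientMode {
		return true // all container layers live in the tmpfs-backed file
	}
	return l.volatile // normal mode: only --rm container layers
}

func main() {
	fmt.Println(inVolatileFile(layer{isContainer: false}, true))                 // false
	fmt.Println(inVolatileFile(layer{isContainer: true}, true))                  // true
	fmt.Println(inVolatileFile(layer{isContainer: true, volatile: true}, false)) // true
	fmt.Println(inVolatileFile(layer{isContainer: true}, false))                 // false
}
```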
Signed-off-by: Alexander Larsson <alexl@redhat.com>
This splits up the containers.json file in the containers store into
two, adding the new file `volatile-containers.json`. This new file is
saved using the NoSync options, which is faster but isn't robust in
the case of an unclean shutdown.
In the standard case, only containers marked as "volatile" (i.e. those
started with `--rm`) are stored in the volatile json file. This means
such containers are faster, but may get lost in case of an unclean
shutdown. This is fine for these containers though, as they are not
meant to persist.
In the transient store case, all containers are stored in the volatile
json file, and it (plus the matching lock file) is stored on runroot
(i.e. tmpfs) instead of the regular directory. This means all
containers are fast to write, and none are persisted across boots.
Signed-off-by: Alexander Larsson <alexl@redhat.com>
They don't, as the code assumes, test whether the store
is writable; they return true for read-only stores as well.
Signed-off-by: Miloslav Trmač <mitr@redhat.com>
Move the createMappedLayer logic completely inside this
function, to make it self-contained.
Passing the rwImageStore will remove the need for a cast
and clarify the logic.
Should not change behavior.
Signed-off-by: Miloslav Trmač <mitr@redhat.com>
Don't immediately unlock the layer store after locking it.
Luckily it only matters for additional stores.
Signed-off-by: Miloslav Trmač <mitr@redhat.com>
Don't create "mounts" or "tmp" subdirectories under the graph root at
startup, since we ended up not using them.
Signed-off-by: Nalin Dahyabhai <nalin@redhat.com>
This integrates ReloadIfChanged, and makes it clearer that the responsibility
for maintaining locking details is with the layerStore; we can change it
in a single place.
Should not change behavior.
Signed-off-by: Miloslav Trmač <mitr@redhat.com>
This integrates ReloadIfChanged, and makes it clearer that the responsibility
for maintaining locking details is with the layerStore; we can change it
in a single place.
Should not change behavior.
Signed-off-by: Miloslav Trmač <mitr@redhat.com>
... for a bit of extra safety. That requires us to be a bit
more explicit in one of the users.
Should not change behavior.
Signed-off-by: Miloslav Trmač <mitr@redhat.com>
This integrates ReloadIfChanged, and makes it clearer that the responsibility
for maintaining locking details is with the imageStore; we can change it
in a single place.
Should not change behavior.
Signed-off-by: Miloslav Trmač <mitr@redhat.com>
This integrates ReloadIfChanged, and makes it clearer that the responsibility
for maintaining locking details is with the imageStore; we can change it
in a single place.
Should not change behavior.
Signed-off-by: Miloslav Trmač <mitr@redhat.com>
This integrates ReloadIfChanged, and makes it clearer that the responsibility
for maintaining locking details is with the containerStore; we can change it
in a single place.
Should not change behavior.
Signed-off-by: Miloslav Trmač <mitr@redhat.com>
This integrates ReloadIfChanged, and makes it clearer that the responsibility
for maintaining locking details is with the containerStore; we can change it
in a single place.
Should not change behavior.
Signed-off-by: Miloslav Trmač <mitr@redhat.com>