jackrabbit-oak-dev mailing list archives

From: Marcel Reutegger <mreut...@adobe.com>
Subject: RE: When optimistic locking fails
Date: Thu, 07 Mar 2013 12:10:54 GMT
Hi,

> When encountering a case where the optimistic locking mechanism can't
> push a commit through in, say, one second, instead of waiting for a
> longer while I'd have the SegmentMK fall back to pessimistic locking,
> where it explicitly acquires a hard lock on the journal and does the
> rebase/hook processing one more time while holding that lock. This
> guarantees that all commits will go through eventually (unless there's
> a conflict or a validation failure), while keeping the benefits of
> optimistic locking for most cases. And even for scenario 1 the bulk of
> the commit has already been persisted by the time the pessimistic
> locking kicks in, so the critical section should still be much smaller
> than with Jackrabbit 2.x, where the lock is also held while the change
> set is being persisted.

this sounds good to me. as the cost of aborting a transaction
increases, the system should be allowed to compromise on throughput.
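
for illustration, a rough sketch of what such a fallback loop could
look like. all type and method names below (Journal, NodeState,
rebaseAndRunHooks) are made up for the example, not actual SegmentMK
API:

    // sketch only: Journal, NodeState and rebaseAndRunHooks() are
    // hypothetical stand-ins, not actual SegmentMK types
    interface NodeState {}

    interface Journal {
        NodeState getHead();
        // atomically replace the head if it still equals 'expected'
        boolean compareAndSetHead(NodeState expected, NodeState next);
        void setHead(NodeState head);   // only valid while holding the lock
        void lock();
        void unlock();
    }

    abstract class OptimisticWithFallback {
        // rebase 'changes' onto 'base' and run the commit hooks; may
        // throw on conflict or validation failure (hypothetical)
        abstract NodeState rebaseAndRunHooks(NodeState changes, NodeState base);

        NodeState commit(Journal journal, NodeState changes) {
            // optimistic phase: retry for roughly one second
            long deadline = System.currentTimeMillis() + 1000;
            while (System.currentTimeMillis() < deadline) {
                NodeState base = journal.getHead();
                NodeState rebased = rebaseAndRunHooks(changes, base);
                if (journal.compareAndSetHead(base, rebased)) {
                    return rebased;     // optimistic commit went through
                }
            }
            // pessimistic fallback: the bulk of the change set is already
            // persisted, so only the final rebase/hook pass and the head
            // update happen inside the critical section
            journal.lock();
            try {
                NodeState rebased = rebaseAndRunHooks(changes, journal.getHead());
                journal.setHead(rebased);
                return rebased;
            } finally {
                journal.unlock();
            }
        }
    }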

on the other hand I'm not sure this is really a good solution for the
second case. wouldn't the system quickly degrade into serialized
execution, with write locks held on the journal?

one of the ideas I had in mind for the first MongoMK was to use
a scheduler, at least one per MicroKernel instance. the scheduler
would have two responsibilities: 1) schedule commits and batch them
together to reduce the number of updates on the journal, which should
increase throughput for concurrent writes in the single-instance case,
and 2) coordinate journal commits with other MicroKernel instances.
maybe that is also where a fallback to pessimistic locking could
happen...
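
to make the batching part more concrete, a minimal sketch of such a
per-instance scheduler. Commit and applyBatch() are placeholders, not
real MicroKernel API:

    import java.util.ArrayList;
    import java.util.List;
    import java.util.concurrent.BlockingQueue;
    import java.util.concurrent.LinkedBlockingQueue;

    // placeholder for whatever represents a pending commit
    interface Commit {}

    // sketch only: one scheduler per MicroKernel instance
    abstract class CommitScheduler implements Runnable {
        private final BlockingQueue<Commit> pending = new LinkedBlockingQueue<>();

        // called by writer threads; returns once the commit is queued
        void submit(Commit commit) {
            pending.add(commit);
        }

        // apply all commits as a single journal update (hypothetical)
        abstract void applyBatch(List<Commit> batch);

        @Override
        public void run() {
            List<Commit> batch = new ArrayList<>();
            try {
                while (true) {
                    batch.add(pending.take());  // wait for the first commit
                    pending.drainTo(batch);     // then grab everything queued
                    applyBatch(batch);          // n commits, 1 journal update
                    batch.clear();
                }
            } catch (InterruptedException e) {
                Thread.currentThread().interrupt();
            }
        }
    }

in a real implementation submit() would presumably block until the
batch containing its commit has been applied, and failures would have
to be reported per commit rather than for the whole batch.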

an open question is how commits with conflicts are treated in the
batch update case.

regards
 marcel

