openjpa-users mailing list archives

From Philip Aston <>
Subject Re: Is the implementation of lock(LockModeType.READ) correct?
Date Sun, 15 Feb 2009 09:22:39 GMT

Pinaki - I agree with David that you're not addressing the purpose of read
locks.

Here's my opinion:

1. Any locking mechanism that is prone to race conditions (no matter
how rare) is useless.

2. Given this, for read locks to add distinct value to that provided
by write locks, I think you have to interpret the specification as I
have above. Of course, the specification does allow an implementation
to wimp out and use write locks when read locks are requested.
Personally, I don't see the ambiguity in the specification, but we're
only going to resolve our disagreements about its intention by
conferring with the specification authors.

3. Implementing read locks using SELECT FOR UPDATE is not appropriate
due to the risk of deadlock that David identifies. A better approach
would be to re-order the optimistic lock checks that OpenJPA currently
does to be done after any INSERT or UPDATE statements for the
transaction. This fulfils the goal of ensuring that the effects of
the transaction were calculated based on a consistent view. Or you
could do what EclipseLink does - issue an UPDATE to the read locked
row to set the version column to itself - I prefer reordering locks.

4. Read locks *do* provide different, valuable behaviour compared to
write locks. Firstly, with write locks you have to flush in order for
them to become effective (a performance cost); secondly, only one
transaction can hold a write lock - the rest are blocked. Many
transactions can hold read locks, so the difference can be seen as
"read locks are optimistic, write locks are pessimistic".
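The race in point 1 can be sketched with a small, self-contained model that replays David's bank-account time sequence. The Row class, balance and version fields are hypothetical stand-ins for a versioned entity, not OpenJPA internals; the point is that both read-lock version checks run before either transaction writes, so both commits succeed and the invariant breaks:

```java
// In-memory sketch of optimistic version checking (hypothetical model,
// not OpenJPA code), replaying the interleaving from this thread.
public class ReadLockRace {
    static final class Row {
        int balance;
        int version;
        Row(int balance) { this.balance = balance; this.version = 1; }
    }

    /** Returns the combined balance after the racy interleaving. */
    static int totalAfterRace() {
        Row acc1 = new Row(100);
        Row acc2 = new Row(50);

        // Each transaction snapshots the version of the row it read-locks.
        int tx1Snapshot = acc2.version; // Innocent[1] read-locks Account[2]
        int tx2Snapshot = acc1.version; // Innocent[2] read-locks Account[1]

        // Both read-lock checks run before either write, so both pass.
        if (tx1Snapshot != acc2.version || tx2Snapshot != acc1.version) {
            throw new IllegalStateException("version check failed");
        }

        // Innocent[2] updates Account[2] and commits: 50 - 150 = -100.
        acc2.balance -= 150;
        acc2.version++;

        // Innocent[1] updates Account[1] and commits: 100 - 150 = -50.
        // Its read-lock check on Account[2] has already run, so the
        // concurrent change goes unnoticed and the commit succeeds.
        acc1.balance -= 150;
        acc1.version++;

        return acc1.balance + acc2.balance; // -150: invariant violated
    }

    public static void main(String[] args) {
        System.out.println("combined balance = " + totalAfterRace());
    }
}
```

If the read-lock version checks were re-run after the UPDATEs (the reordering suggested in point 3), Innocent[1]'s second check on Account[2] would fail and the transaction would roll back instead of committing.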

- Phil

dezzio wrote:
> Hi Pinaki,
> I think the issue is whether a LockModeType.READ holds for the entire 
> transaction (the subject tx), from the moment that the lock is obtained 
> until the moment when the transaction has successfully committed.  By 
> "hold", I mean that either another tx cannot successfully commit a 
> change to an object that the subject tx has locked until the subject's 
> tx ends, or that the subject tx will fail if another transaction has 
> successfully committed a change prior to the subject tx's end.
> In the case of the OpenJPA implementation and the time sequence under 
> discussion, the lock would hold if the implementation obtained a 
> database row level lock (SELECT FOR UPDATE) when it checked the locked 
> object's version.
> A peripheral question is whether the spec requires that a read lock hold 
> for the entire tx (as defined above). If it does, the TCK certainly 
> doesn't test for that compliance, and OpenJPA is not in compliance.
> A clear downside to locking the row when checking the version for a read 
> lock is that two or more transactions with no incompatible changes but a 
> variety of read locks for unchanged objects could end up in deadlock.
> My take is that the tradeoff is worthwhile, especially since 
> LockModeType.WRITE will give the consistency desired.  SFAIK, there are 
> not a lot of implementation options to make a read lock hold as defined. 
> However, the expert group, currently discussing lock mode types, 
> should make clear exactly what can be expected for all lock modes, and 
> have TCK tests to ensure compliance.  Intentional ambiguity in a spec is 
> like infidelity in a marriage: it's a knife in the heart of reasonable 
> expectations.
> Cheers,
> David
> Pinaki Poddar wrote:
>> The expressed view relates to Philip's use case by his own observation:
>>> It "works" if I run with non-enhanced classes, since then there is no
>>> change detection and all rows get written and version checked.
>> The point is that what OpenJPA decides to flush in a commit is not the
>> entire set {A,B,C} but only the dirty subset {B,C}. So 'transaction
>> consistency' is ensured but not 'database consistency', because another
>> transaction may have committed {A,B}. And that breaks the parity
>> invariance of the entire set {A,B,C}. 
>> dezzio wrote:
>>> Hi Pinaki,
>>> Actually, much as I like your concepts, I don't yet see how they 
>>> illuminate the issue.
>>> Cheers,
>>> David
>>> Pinaki Poddar wrote:
>>>> Hi David & Philip,
>>>> I have not had time to give this use case the attention it deserves,
>>>> but reading it brings up certain aspects that I would like to share.
>>>> This interesting use case can fail and is failing. But the issue it
>>>> reveals goes beyond locking semantics. 
>>>> The behavior of a lock is described at the datum level -- the warranty
>>>> given for shared access/mutation of a *single datum* in a consistent
>>>> way. A transaction goes to the next stage and describes the warranty
>>>> given for a set of data as an atomic 'unit of work'. 
>>>> But this test case demands an even higher level of warranty --
>>>> consistency or invariance of a set-based property (in this case the
>>>> odd-even parity of 3 instances), which is neither the property of an
>>>> individual datum nor the property of a unit of work.
>>>> Of course, the optimism of the optimistic transaction model results in
>>>> a weaker warranty of set-based invariance. To ensure set-based property
>>>> invariance, a transaction must commit all 3 instances (with consistent
>>>> odd-even parity) as a unit of work, but what it does is read {A,B,C}
>>>> and write only {B,C}.  
>>>> I will refrain from describing which flags of which OpenJPA
>>>> configuration property can be tweaked to get there until I have heard
>>>> your comments on the views expressed in this post.  
>>>> Philip Aston wrote:
>>>>> Hi David,
>>>>> Thanks for confirming this. So to summarise where we are, we have:
>>>>>  1. A reasonable use case that can fail with some unlucky timing.
>>>>>  2. A technical test case demonstrating the problem that does not rely
>>>>> on unlucky timing.
>>>>>  3. A disagreement in our readings of whether 1 and 2 are spec.
>>>>> compliant. Personally, I don't share your reading of the spec. In my
>>>>> reading, read locks are safe and provide a concrete guarantee that if
>>>>> a locked entity is changed by another transaction, the locking
>>>>> transaction will not complete.
>>>>> (This is a different QoS compared to a write lock - if a write lock is
>>>>> obtained and the pc flushed, the transaction knows that it will not
>>>>> fail
>>>>> due to another transaction updating the locked entity. Read locks are
>>>>> "more optimistic" and can support higher concurrency if there is
>>>>> minimal contention - many transactions can hold read locks, only one
>>>>> can hold write locks.)
>>>>> How can I convince you to change your interpretation of the spec?
>>>>> Anyone
>>>>> else have an opinion?
>>>>> FWIW, EclipseLink passes the test case.
>>>>> - Phil
>>>>> dezzio (via Nabble) wrote:
>>>>>> Hi Philip,
>>>>>> Let's take a closer look.
>>>>>> We have two bank accounts, Account[1] and Account[2], shared
>>>>>> jointly by customers Innocent[1] and Innocent[2]. The bank's
>>>>>> business rule is that no withdrawal can be made that draws
>>>>>> the combined total of the accounts below zero. This rule is
>>>>>> enforced in the server-side Java application that customers
>>>>>> use.
>>>>>> At the start of the banking day, the accounts stand at:
>>>>>>      Account[1] balance 100.
>>>>>>      Account[2] balance 50.
>>>>>> Innocent[1] wants to draw out all the money, and asks the
>>>>>> application to take 150 from Account[1]. Innocent[2] also
>>>>>> wants to draw out all the money, and asks the application to
>>>>>> take 150 from Account[2]. By itself, either transaction
>>>>>> would conform to the bank's business rule.
>>>>>> The application implements the withdrawal logic by doing the
>>>>>> following for each transaction.
>>>>>> For Innocent[1], read Account[1] and Account[2]. Obtain a
>>>>>> read lock on Account[2]. Refresh Account[2]. Deduct 150 from
>>>>>> Account[1]. Verify business rule, result, sum of balances =
>>>>>> 0. Call JPA commit.
>>>>>> For Innocent[2], read Account[1] and Account[2]. Obtain a
>>>>>> read lock on Account[1]. Refresh Account[1]. Deduct 150 from
>>>>>> Account[2]. Verify business rule, result, sum of balances =
>>>>>> 0. Call JPA commit.
>>>>>> Within JPA commit, as seen over the JDBC connections, the
>>>>>> following time sequence occurs. (Other time sequences can
>>>>>> yield the same result.)
>>>>>> Innocent[1]: Check version of Account[2]: passes.
>>>>>> Innocent[2]: Check version of Account[1]: passes.
>>>>>> Innocent[2]: Update balance of Account[2], withdraw 150,
>>>>>>                  setting balance to -100: does not block.
>>>>>> Innocent[2]: commit: successful
>>>>>> Innocent[2]: Receives 150.
>>>>>> Innocent[1]: Update balance of Account[1], withdraw 150,
>>>>>>                  setting balance to -50: does not block.
>>>>>> Innocent[1]: commit: successful.
>>>>>> Innocent[1]: Receives 150.
>>>>>> After the two transactions:
>>>>>> Account[1]: balance -50
>>>>>> Account[2]: balance -100
>>>>>> Clearly the bank would not be happy. What's a developer to
>>>>>> do?
>>>>>> I think the developer needs an education about what is meant
>>>>>> by the JPA spec. What JPA is guaranteeing is that when JPA
>>>>>> commit is called, the objects with read locks will have
>>>>>> their versions checked. The objects with write locks will
>>>>>> have their versions checked and changed. The objects that
>>>>>> have been modified will have their versions checked, their
>>>>>> information updated, and their versions changed. Clearly all
>>>>>> of these rules were enforced in the above example.
>>>>>> If the developer had used write locks, both transactions
>>>>>> would not have succeeded. In fact, for the above example and
>>>>>> a similar time sequence, if write locks had been used in
>>>>>> place of read locks, there would have been deadlock.
>>>>>> Now, if in fact, I'm wrong about my interpretation of the
>>>>>> JPA spec (and it wouldn't be the first time) then you have a
>>>>>> case. I'd be curious to know whether other JPA
>>>>>> implementations pass your elegant test case, and what they
>>>>>> are doing differently that makes it so.
>>>>>> Also, if I am wrong about my interpretation, then the JPA
>>>>>> TCK needs a test case that will snag this failure, because
>>>>>> OpenJPA passes the current JPA TCK.
>>>>>> Cheers,
>>>>>> David
>>>>>> Philip Aston wrote:
>>>>>>> Oh yeah - my bad. Try this one instead:
>>>>>>> Suppose there is a set of Accounts, and a business rule that
>>>>>>> the net balance must be positive.
>>>>>>> Innocent wants to draw down on Account 1 as far as possible. It
>>>>>>> read locks the set of Accounts, sums up the values, and subtracts
>>>>>>> the positive total from Account 1. Innocent begins its commit, and
>>>>>>> its read locks are validated.
>>>>>>> Meanwhile InnocentToo does the same for Account 2, and commits.
>>>>>>> Innocent updates Account 1 and finishes its commit.
>>>>>>> The total across the accounts is now negative, violating the
>>>>>>> business rule. If read locks worked as I think they should,
>>>>>>> Innocent would have received an OptimisticLockException.
>>>>>>> dezzio wrote:
>>>>>>>> Hi Philip,
>>>>>>>> When two transactions read the same version of AccountSummary,
>>>>>>>> both cannot successfully update its sum. Only one will
>>>>>>>> successfully commit.
>>>>>>>> David
