openjpa-users mailing list archives

From PaulCB <>
Subject Re: Dirty Data Under Concurrency
Date Mon, 01 Dec 2008 07:55:38 GMT

Hi Milosz,

Yes, I'm using InnoDB. I'll try the select for update (I didn't know how
to get JPA to do that ;-)). My current strategy does work, as the update
can only be done by thread 2 once thread 1 has committed:

> > 1) EJB finds the ACCOUNT row for the given account id using a simple
> > 2) Updates a field on the ACCOUNT row to get a lock on the row and
> > does em.persist and em.flush

Once steps 1 and 2 above are done by thread 2, it reads the data in the
balance table and sees the updates done by thread 1. With REPEATABLE
READ, the read shows the data as it was prior to thread 1's commit,
while READ COMMITTED shows the new data.
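For reference, the "update-to-lock" workaround described above can be sketched roughly as follows. The entity and field names here are hypothetical; the point is that the dummy update plus flush makes the database take a row lock on ACCOUNT before the balance rows are read, so a concurrent transaction blocks until the first one commits:

```java
import javax.persistence.EntityManager;

// Sketch of the workaround: touch the ACCOUNT row to acquire a row
// lock before reading the balance table. Account and Balance are
// hypothetical entities standing in for the real ones in the EJB.
public class BalanceUpdater {

    public void updateBalance(EntityManager em, long accountId, double amount) {
        // 1) Find the ACCOUNT row for the given account id.
        Account account = em.find(Account.class, accountId);

        // 2) Update a field and flush so the provider issues an UPDATE,
        //    which makes InnoDB lock the row; a concurrent transaction
        //    doing the same blocks here until this one commits.
        //    (em.persist is redundant for an already-managed entity.)
        account.setLastModified(new java.util.Date());
        em.flush();

        // 3) Now read and update the balance rows: any earlier writer
        //    has committed, so under READ COMMITTED its changes are
        //    visible here.
        Balance balance = em.find(Balance.class, accountId);
        balance.setAmount(balance.getAmount() + amount);
    }
}
```

This requires a container-managed or application-managed transaction around the call; it is a sketch of the pattern, not a drop-in implementation.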


-----Original Message-----
From: Miłosz Tylenda (via Nabble) <ml-user>
Reply-to: Post 1594665 on Nabble <ml-node>
To: PaulCB <>
Subject: Re: Dirty Data Under Concurrency
Date: Sun, 30 Nov 2008 00:46:11 -0800 (PST)

Good that it now works. However, it seems weird to me that it was this
change that helped. Do you use InnoDB tables? In MySQL, READ COMMITTED
is a lower isolation level than REPEATABLE READ, so I am wondering how
lowering the isolation level could improve consistency.

In this case I would try SELECT FOR UPDATE (EntityManager.lock in JPA
terms). That would cause the second thread to wait until the first does
its job, and then execute find+calculate+persist without losing
anything.
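A minimal sketch of the EntityManager.lock approach, using the JPA 2.0 pessimistic lock mode (note the assumption: JPA 1.0, current at the time of this thread, only defined the optimistic READ/WRITE modes, so with OpenJPA 1.x pessimistic locking went through vendor-specific configuration instead). Account is again a hypothetical entity:

```java
import javax.persistence.EntityManager;
import javax.persistence.LockModeType;

// Sketch of the SELECT FOR UPDATE approach via EntityManager.lock,
// assuming a JPA 2.0 provider. Account is a hypothetical entity.
public class LockingUpdater {

    public void updateWithLock(EntityManager em, long accountId, double amount) {
        Account account = em.find(Account.class, accountId);

        // On databases that support it, this issues SELECT ... FOR UPDATE,
        // so the second thread blocks here until the first commits.
        em.lock(account, LockModeType.PESSIMISTIC_WRITE);

        // find + calculate + persist now operates on up-to-date data.
        account.setBalance(account.getBalance() + amount);
    }
}
```

The lock must be acquired inside an active transaction, and the blocked thread re-reads current data once the lock is released, which avoids the lost update without lowering the isolation level.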
