db-ojb-dev mailing list archives

From Oleg Nitz <...@ukr.net>
Subject Re: [PFE] locking via database for multi-VM environment
Date Sun, 28 Dec 2003 17:01:36 GMT
Hi Thomas,

On Sunday 28 December 2003 10:37, Thomas Mahler wrote:
> > Okay, but don't forget about the implicit memory limit.
> Considering today's cheap hardware I don't see a real problem here. If we
>    get tight with RAM we turn on swapping ;-)
Okay. The situation is similar to the in-memory database for locking that we 
discussed with Armin.

> > And what would you say about the variant from my next message?
> > I'll repeat it below (with the bug fix applied :)
> > -------------------------------------------------------
> > one more variant with two sub-variants:
> >
> > The record should contain
> > - object ID
> > - the number of write locks: 0 or 1
> > - the number of read locks: >=0
> > - timestamp
> How can we decide if the current transaction holds the write lock on a
> given object if we don't store this information?
The detailed locking info should be held by the in-memory LockManagers 
(LockMaps) in each VM. The VM attempts to write to the database "writeLock=1 
WHERE writeLock=0 AND readLocks=0". The success of this operation means 
success of the lock. Of course, this works properly only if the database works 
properly and doesn't allow this operation to two concurrent connections at 
the same time.
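To make the idea concrete, here is a minimal sketch of that conditional-UPDATE 
write lock. It uses SQLite in autocommit mode purely for brevity - OJB itself 
would issue the same SQL through JDBC - and the table contents and object id 
are illustrative:

```python
import sqlite3

# SQLite stands in for the shared lock database; 'obj1' is a made-up oid.
conn = sqlite3.connect(":memory:", isolation_level=None)
conn.execute("CREATE TABLE LOCK_TABLE (oid TEXT PRIMARY KEY,"
             " writeLocks INTEGER, readLocks INTEGER, timestamp INTEGER)")
conn.execute("INSERT INTO LOCK_TABLE VALUES ('obj1', 0, 0, 0)")

def try_write_lock(conn, oid):
    # The UPDATE matches a row only if nobody holds a lock, so its update
    # count tells us whether we won the lock. The database's own atomicity
    # arbitrates between concurrent VMs.
    cur = conn.execute(
        "UPDATE LOCK_TABLE SET writeLocks=1"
        " WHERE oid=? AND writeLocks=0 AND readLocks=0", (oid,))
    return cur.rowcount == 1

first = try_write_lock(conn, "obj1")   # succeeds: lock acquired
second = try_write_lock(conn, "obj1")  # fails: writeLocks is already 1
```

In JDBC the equivalent check would be on the return value of 
`PreparedStatement.executeUpdate()`.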

> > For write lock:
> > UPDATE LOCK_TABLE SET writeLocks=1
> >      WHERE oid=? AND writeLocks=0 AND readLocks=0
Actually the situation is more complicated: if the lock table doesn't contain 
a record with the given oid, the SQL should be 
INSERT INTO LOCK_TABLE (oid, writeLocks, readLocks, timestamp) 
                VALUES (?, 1, 0, ?)
If we decide to remove lock record with writeLocks=0 AND readLocks=0, 
then write lock will always lead to INSERT.
But a read lock may require either INSERT or UPDATE, so the algorithm becomes 
more complicated: try to INSERT, and in case of failure try to UPDATE. So I 
begin to doubt that algorithm 2 (with autoCommit=true) is faster. Now I think 
it's better to read the lock record first (holding a database read lock on it), 
then update or insert, then commit the database transaction - "algorithm 1".
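A sketch of that "algorithm 1" read-lock path, again using SQLite for brevity. 
A real OJB LockManager would go through JDBC and rely on SELECT ... FOR UPDATE 
(or an equivalent) under READ_COMMITTED; SQLite's BEGIN IMMEDIATE stands in 
for that row lock here, and the function name is my own invention:

```python
import sqlite3
import time

conn = sqlite3.connect(":memory:", isolation_level=None)
conn.execute("CREATE TABLE LOCK_TABLE (oid TEXT PRIMARY KEY,"
             " writeLocks INTEGER, readLocks INTEGER, timestamp INTEGER)")

def acquire_read_lock(conn, oid):
    # Read the lock record first, then UPDATE or INSERT, then commit.
    conn.execute("BEGIN IMMEDIATE")  # stand-in for SELECT ... FOR UPDATE
    row = conn.execute("SELECT writeLocks FROM LOCK_TABLE WHERE oid=?",
                       (oid,)).fetchone()
    if row is None:
        # No record yet: the read lock leads to an INSERT.
        conn.execute("INSERT INTO LOCK_TABLE VALUES (?, 0, 1, ?)",
                     (oid, int(time.time())))
        granted = True
    elif row[0] == 0:
        # No writer holds the object: one more reader is fine.
        conn.execute("UPDATE LOCK_TABLE SET readLocks = readLocks + 1"
                     " WHERE oid=?", (oid,))
        granted = True
    else:
        granted = False  # a write lock is held elsewhere
    conn.execute("COMMIT")
    return granted

r1 = acquire_read_lock(conn, "obj1")  # INSERTs the record
r2 = acquire_read_lock(conn, "obj1")  # UPDATEs readLocks to 2
```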

> > if there is a local cached copy of the object then do
> > SELECT timestamp FROM LOCK_TABLE WHERE oid=?
> > and compare the result to the local timestamp,
> > if changed, then re-read the object.
There is a problem here. If we use the cache, we need to use the timestamp to 
decide whether an object in the cache is outdated. Therefore we need to keep 
the corresponding lock record in the database even after all locks are 
released. Thus the number of lock records may grow until it reaches the number 
of records in all database tables - IMHO this may lead to performance 
degradation. Will you mention cheap RAM in this case too? :-)
Probably we need some BIG_TIME_LIMIT for cache and locks, and we should 
periodically remove lock records older than BIG_TIME_LIMIT. 
Note that this will remove not only the overaged locks, but will also 
mark as outdated all cached objects that were modified BIG_TIME_LIMIT ago 
or earlier.
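A sketch of that periodic cleanup, assuming a BIG_TIME_LIMIT of one hour (an 
illustrative value, not something fixed in this thread), with SQLite again 
standing in for the real lock database:

```python
import sqlite3
import time

BIG_TIME_LIMIT = 3600  # hypothetical cutoff in seconds

conn = sqlite3.connect(":memory:", isolation_level=None)
conn.execute("CREATE TABLE LOCK_TABLE (oid TEXT PRIMARY KEY,"
             " writeLocks INTEGER, readLocks INTEGER, timestamp INTEGER)")
now = int(time.time())
conn.execute("INSERT INTO LOCK_TABLE VALUES ('fresh', 0, 0, ?)", (now,))
conn.execute("INSERT INTO LOCK_TABLE VALUES ('stale', 0, 0, ?)",
             (now - 2 * BIG_TIME_LIMIT,))

def remove_overaged_locks(conn, now):
    # Drop unlocked records older than BIG_TIME_LIMIT; cached copies of
    # such objects must then be treated as outdated.
    cur = conn.execute(
        "DELETE FROM LOCK_TABLE WHERE writeLocks=0 AND readLocks=0"
        " AND timestamp < ?", (now - BIG_TIME_LIMIT,))
    return cur.rowcount

removed = remove_overaged_locks(conn, now)  # removes only 'stale'
```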
Another option is to disable the cache in the multi-VM environment and to use 
timestamps only for removing overaged locks. This would allow us to remove lock 
records with writeLocks=0 AND readLocks=0. Fewer records mean better 
performance.

> BUT: Even such a fool-proof approach can fail! Some time ago I used
> such a mechanism to implement a sequence manager for an O/R layer in
> Delphi against a DB2 database.
> Everything worked fine when we used the IBM ODBC driver. Then we tried
> to use the Borland native DB2 driver as it was expected to be faster.
> It was much faster, but my "locking" where-clauses did not work any more!
Oh, that's bad news. Can you give more details about this? 

> As the quality of many JDBC drivers is not that high, I expect that
> there could be similar problems with certain JDBC-drivers....
That is why I am not sure about "algorithm 1": it uses simpler SQL 
statements, but it requires correct support for the READ_COMMITTED isolation 
level, and I am not sure that databases like MS Access support such things :( 
And of course I can't be sure about all JDBC drivers, even if a given database 
supports READ_COMMITTED correctly.

So, we should either write our own LockManager, or verify that 
HypersonicSQL supports the necessary SQL features properly and then recommend 
that OJB users use HSQL for locking. Or give them some LockTest and 
recommend running it against the database that they choose to use for the 
locking table.

To unsubscribe, e-mail: ojb-dev-unsubscribe@db.apache.org
For additional commands, e-mail: ojb-dev-help@db.apache.org
