db-ojb-dev mailing list archives

From Oleg Nitz ...@ukr.net>
Subject Re: [PFE] locking via database for multi-VM environment
Date Sun, 21 Dec 2003 14:41:43 GMT
On Sunday 21 December 2003 10:51, Thomas Mahler wrote:
> > 1) Locks should be represented by one record per locked object.
> > Such a record should contain
> > - the object ID,
> > - the ID of the VM holding the write lock, and
> > - the IDs of the VMs that hold read locks.
> > This means that the total number of VMs is limited by the number of
> > fields for readers' IDs, but we can make this limit configurable. It is
> > limited by the number of columns allowed by the RDBMS. I think this is
> > an acceptable way.
>
> two things:
> 1. how can we distinguish transactions running in the same VM in your
> approach? I firmly believe that we need a mechanism to identify
> transactions across VMs and physical machines. That's why I propose to
> use GUIDs as a unique key for transactions!
This is just an optimization: one VM can handle locking for all transactions 
running in it, and all it needs to know about transactions running in other 
VMs is whether they hold at least one read lock and at least one write lock 
on a given object. So each VM can calculate these two aggregate values over 
all of its transactions and write them to the database. This would 
substantially reduce the number of database operations.
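The per-VM aggregation above can be sketched as follows. This is a minimal illustration, not OJB code; the class and method names (LocalLockAggregator, aggregate) are hypothetical.

```java
// Hypothetical sketch: reduce all of one VM's per-transaction locks on an
// object to the two flags it needs to publish (any reader? any writer?).
import java.util.List;

public class LocalLockAggregator {
    enum Kind { READ, WRITE }

    // one lock held by one local transaction on one object
    record Lock(String txId, Object objectId, Kind kind) {}

    /** For one object, collapse all local transactions' locks to two flags. */
    static boolean[] aggregate(List<Lock> localLocks, Object objectId) {
        boolean hasRead = false, hasWrite = false;
        for (Lock l : localLocks) {
            if (!l.objectId().equals(objectId)) continue;
            if (l.kind() == Kind.READ) hasRead = true;
            else hasWrite = true;
        }
        // only these two values would be written to the shared database
        return new boolean[] { hasRead, hasWrite };
    }
}
```

Only the aggregate flags cross the VM boundary; distinguishing the individual local transactions remains a purely in-VM concern.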
More precisely, in OTM there will be a LockMap implementation (with a 
different meaning than the LockMap class in ODMG) which will track all locks 
for a given OTMKit. But since OTM allows multiple OTMKits in the same VM, 
there may be many LockMap instances. Thus the lock record should contain 
LockMap IDs rather than VM IDs.
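Putting the proposal together, the lock record could look like the sketch below: one writer slot, a fixed (configurable) number of reader slots, and a timestamp. This is an illustrative in-memory model, not OJB API; LockRecord, MAX_READERS and the method names are assumptions.

```java
// Hypothetical model of the proposed lock record: one row per locked
// object, with a fixed number of reader slots (the configurable limit).
public class LockRecord {
    static final int MAX_READERS = 10;   // default limit, configurable

    final Object objectId;               // ID of the locked object
    String writerLockMapId;              // LockMap holding the write lock
    final String[] readerLockMapIds = new String[MAX_READERS];
    long timestamp;                      // for detecting timed-out locks

    LockRecord(Object objectId) { this.objectId = objectId; }

    /** A read lock succeeds unless another LockMap holds the write lock. */
    synchronized boolean addReadLock(String lockMapId) {
        if (writerLockMapId != null && !writerLockMapId.equals(lockMapId))
            return false;
        for (int i = 0; i < MAX_READERS; i++) {
            if (lockMapId.equals(readerLockMapIds[i])) return true; // held
            if (readerLockMapIds[i] == null) {
                readerLockMapIds[i] = lockMapId;
                timestamp = System.currentTimeMillis();
                return true;
            }
        }
        return false; // limit reached: alter the table / raise the limit
    }

    /** A write lock succeeds only if no other LockMap reads or writes. */
    synchronized boolean addWriteLock(String lockMapId) {
        if (writerLockMapId != null && !writerLockMapId.equals(lockMapId))
            return false;
        for (String r : readerLockMapIds)
            if (r != null && !r.equals(lockMapId)) return false;
        writerLockMapId = lockMapId;
        timestamp = System.currentTimeMillis();
        return true;
    }
}
```

Because every lock on the object goes through the single record, two write locks (or a write lock alongside a foreign read lock) cannot be granted at the same time.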

And I forgot to mention the timestamp field for the lock record.

> 2. I don't think that it is a good idea to have any kind of limit here.
> I know a lot of discussions about the locking mechanisms in Oracle vs.
> DB2. Once there is any fixed limit, people start to blame you for having
> a non scalable, not enterprise ready solution.
> So I'd recommend to avoid any kind of limit in this area.
First of all, please note that we would limit not the number of transactions, 
but the number of computers in a cluster (well, actually LockMaps). How many 
computers in a cluster can you imagine? I can hardly imagine 10 :)
I can't imagine more than 500. 
500 is the limit on the number of columns for DB2; other popular databases 
have greater limits. 
I propose a default limit of 10 (or 16, or 32) and to describe in the docs 
how to increase it: alter the table to add columns and change 
OJB.properties to raise the limit.

Okay, let's consider other variants.
1) Use one long char field to store all readers (LONGVARCHAR, TEXT, CLOB).
+ : simple DB structure
- : such long fields are usually processed more slowly than "short" char 
fields with length <= 254; they sometimes take more storage space (>= 2K per 
row for Sybase) and sometimes need special JDBC tricks (Oracle CLOBs).
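For variant 1, packing and unpacking the reader IDs is straightforward; a minimal sketch (the ReaderField helper and the comma delimiter are assumptions, not part of any proposal above):

```java
// Sketch of variant 1: all reader LockMap IDs packed into a single
// delimited text column.
import java.util.Arrays;
import java.util.HashSet;
import java.util.Set;
import java.util.TreeSet;

public class ReaderField {
    /** Pack reader IDs into one column value, e.g. "lmA,lmB". */
    static String encode(Set<String> readers) {
        // sorted copy so the stored value is deterministic
        return String.join(",", new TreeSet<>(readers));
    }

    /** Unpack the column value back into a set of reader IDs. */
    static Set<String> decode(String field) {
        if (field == null || field.isEmpty()) return new HashSet<>();
        return new HashSet<>(Arrays.asList(field.split(",")));
    }
}
```

The cost noted above is in the database, not in this code: every lock change rewrites the whole field, and long columns carry the storage and JDBC penalties mentioned.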
2) Use two tables: 
the first for object ID, writer VM ID and timestamp, 
the second for object ID and reader VM ID. 
We can't use one table as ODMG does now because of the first problem 
mentioned in my original post: different locks on the same object should 
modify the same database record; otherwise adding two write locks becomes 
possible, as does adding a write lock together with a read lock.
+ : no limits
- : slower
3) Don't use a database at all; have a simple lock manager with an RMI 
interface that keeps all locks in memory and automatically removes timed-out 
locks. This would work much faster than database operations. If you take 
seriously Prevayler's idea of keeping all database records in memory, :o) 
it is reasonable to assume that there is enough memory for locks.
+ : faster, no explicit limits
- : implicitly limited by available memory
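The core of variant 3 could look like the sketch below. The RMI plumbing is omitted; LockServer, its method names, the lazy eviction strategy, and the injectable clock are all assumptions for illustration. Transactions are identified by GUID, in line with the proposal above.

```java
// Sketch of variant 3: a standalone in-memory lock manager that discards
// timed-out locks lazily on each request. In a real deployment the methods
// would sit behind a java.rmi.Remote interface.
import java.util.HashMap;
import java.util.Map;

public class LockServer {
    static final long TIMEOUT_MS = 30_000;

    static final class Entry {
        String writer;                  // tx GUID holding the write lock
        long writerSince;
        final Map<String, Long> readers = new HashMap<>(); // txGuid -> acquired-at
    }

    final Map<Object, Entry> locks = new HashMap<>();
    long now = 0; // injectable clock for testing; real impl: System.currentTimeMillis()

    synchronized boolean writeLock(Object oid, String txGuid) {
        Entry e = locks.computeIfAbsent(oid, k -> new Entry());
        evictExpired(e);
        if (e.writer != null && !e.writer.equals(txGuid)) return false;
        for (String r : e.readers.keySet())
            if (!r.equals(txGuid)) return false;
        e.writer = txGuid;
        e.writerSince = now;
        return true;
    }

    synchronized boolean readLock(Object oid, String txGuid) {
        Entry e = locks.computeIfAbsent(oid, k -> new Entry());
        evictExpired(e);
        if (e.writer != null && !e.writer.equals(txGuid)) return false;
        e.readers.put(txGuid, now);
        return true;
    }

    /** Drop locks older than TIMEOUT_MS, so a crashed VM cannot block forever. */
    private void evictExpired(Entry e) {
        e.readers.values().removeIf(t -> now - t > TIMEOUT_MS);
        if (e.writer != null && now - e.writerSince > TIMEOUT_MS) e.writer = null;
    }
}
```

The timeout plays the same role as the timestamp field in the database variants: it is the only defense against a VM that dies while holding locks.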


---------------------------------------------------------------------
To unsubscribe, e-mail: ojb-dev-unsubscribe@db.apache.org
For additional commands, e-mail: ojb-dev-help@db.apache.org

