db-derby-dev mailing list archives

From Mike Matrigali <mikem_...@sbcglobal.net>
Subject Re: Releasing latches when waiting for locks. When and why?
Date Mon, 20 Nov 2006 21:25:18 GMT

Anders Morken wrote:
> Mike Matrigali:
>>Knut Anders Hatlen wrote:
>>>Thanks Bryan, that could be the case. I have done some more
>>>investigation and it seems like the method is only called when a
>>>B-tree container is opened with a lock policy. It also seems like the
>>>B-tree containers always are opened with a null/NoLocking lock policy
>>>(regardless of isolation level) and therefore the method is never called.
>>Derby uses data-only locking, so it doesn't get locks on rows in the
>>btree.  Instead it gets locks on the data rows that the btree rows
>>point at.
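A minimal sketch of that data-only locking idea, with hypothetical class and field names (this is illustrative only, not Derby's actual internals):

```java
import java.util.HashMap;
import java.util.Map;

// Hypothetical sketch of data-only locking: a scan that matches a
// B-tree entry locks the heap row the entry points at, never the
// index row itself. Names here are illustrative, not Derby's.
public class DataOnlyLocking {
    // A B-tree leaf entry: a key plus a pointer to the base-table row.
    static class IndexEntry {
        final String key;
        final long heapRowLocation;
        IndexEntry(String key, long heapRowLocation) {
            this.key = key;
            this.heapRowLocation = heapRowLocation;
        }
    }

    // Lock table keyed by heap row location -> owning transaction.
    static final Map<Long, String> heapRowLocks = new HashMap<>();

    // "Lock" the data row an index entry points at; nothing is ever
    // recorded against the index key itself.
    static void lockForRead(IndexEntry entry, String txn) {
        heapRowLocks.put(entry.heapRowLocation, txn);
    }

    public static void main(String[] args) {
        IndexEntry e = new IndexEntry("z", 42L);
        lockForRead(e, "txn1");
        System.out.println(heapRowLocks.containsKey(42L)); // true: data row locked
    }
}
```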
> Well, this is exactly the stuff we (me and my partner in crime, Per
> Ottar, who lurks on this list but hasn't posted anything yet, afaik =)
> are investigating as part of our autumn project.
> We've taken some long hard looks at the lock manager, and we're
> convinced that there's a lot of potential for improvement in Derby lock
> management. Not that there's anything wrong with the approach taken in
> the current implementation - correctness first, optimization later. =)
> One of the things that we've observed is that on a single-row update
> transaction, a shared lock is obtained on a row in the B-tree container
> just before an exclusive lock is obtained on a row in the data
> container.  Off the top of my head I can't remember if we figured out
> exactly why. =)
by single row update do you mean (obviously not exact syntax):

update t set x = y where key = z

or an update where current of in a cursor?

In the cursor case Derby maintains an "internal" lock on the btree page
so that it can optimize scanning.  This lock allows the scan to maintain
its position without extra overhead on each next() call on the page.
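That scan-position optimization can be sketched roughly as follows (illustrative only; Derby's real btree scan code is considerably more involved, and the "scan lock" here is just a counter that a page splitter would have to respect):

```java
import java.util.List;

// Illustrative sketch (not Derby's actual code): a cursor scan keeps a
// lightweight "scan position" lock on the current B-tree page so that
// next() can step to the following slot without re-searching from the
// root, as long as the page has not been split out from under it.
public class ScanPositionSketch {
    static class Page {
        final List<String> rows;
        int scanLockCount = 0;           // "internal" scan-lock holders
        Page(List<String> rows) { this.rows = rows; }
    }

    static class Scan {
        private final Page page;
        private int slot = -1;
        Scan(Page page) {
            this.page = page;
            page.scanLockCount++;        // pin position: splitters must wait
        }
        String next() {                  // cheap: no repositioning per call
            slot++;
            return slot < page.rows.size() ? page.rows.get(slot) : null;
        }
        void close() { page.scanLockCount--; }
    }

    public static void main(String[] args) {
        Page p = new Page(List.of("a", "b"));
        Scan s = new Scan(p);
        System.out.println(s.next()); // a
        System.out.println(s.next()); // b
        s.close();
    }
}
```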
>>Having said that, it would be interesting if someone had time to
>>implement a higher-performance latch, plug it in,
>>and see how much it helps.  It would decrease the total time spent
>>in the lock manager.
> We've actually considered doing that for fun, but as our goal right now
> is to produce a big honkin' report, not fix anything, we haven't. Still,
> it would probably give quite a significant benefit.
> Using a Java profiler on a reasonably modern X86 system we've found that
> while acquiring a Derby lock takes about 60-70 microseconds and a latch
> costs 50-60 microseconds in the non-contended (single-thread) cases,
> acquiring and releasing a ReentrantLock (available in Java 1.5 and up)
> costs about 1.5 microseconds.  A latch could in theory be about as cheap
> as a ReentrantLock, and on Java 1.5 and up it would make sense to base
> latches on ReentrantLocks.
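A sketch of what a ReentrantLock-backed latch might look like (exclusive mode only; a real latch would also need a shared mode, and the timing loop in main is a rough, unscientific estimate, not a proper benchmark):

```java
import java.util.concurrent.locks.ReentrantLock;

// Sketch of the idea in the thread: a page latch implemented directly
// on java.util.concurrent.locks.ReentrantLock (Java 1.5+), bypassing
// the general-purpose lock manager entirely.
public class Latch {
    private final ReentrantLock lock = new ReentrantLock();

    public void acquire() { lock.lock(); }
    public void release() { lock.unlock(); }
    public boolean isHeld() { return lock.isLocked(); }

    public static void main(String[] args) {
        Latch latch = new Latch();
        // Crude estimate of the uncontended acquire/release cost:
        long t0 = System.nanoTime();
        for (int i = 0; i < 1_000_000; i++) {
            latch.acquire();
            latch.release();
        }
        long perPair = (System.nanoTime() - t0) / 1_000_000;
        System.out.println("~" + perPair + " ns per acquire/release pair");
    }
}
```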
Do you have something to compare these numbers to?  Maybe compare them
to the overall cost of Derby inserting a single row (not committing), on
the same machine.  Basically I would be interested in what the current
locking overhead is, in cpu cost, per some standard statement/xact.  I
think when I looked at this awhile back, locking/latching together was
somewhere around 5% of a TPC/B-like xact.  Given that, making locking
cost nothing would only give a 5% improvement, so I never was that
motivated to look at latching.  But maybe other parts of the system/JVM
have raised the ratio.
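The 5% reasoning above is just Amdahl's law; a quick arithmetic check of the ceiling it implies:

```java
// Amdahl's-law check of the ~5% locking/latching figure from the
// thread: even eliminating that fraction entirely caps the overall
// speedup at about 1.053x.
public class Amdahl {
    public static void main(String[] args) {
        double lockFraction = 0.05;                 // ~5% of a TPC/B-like xact
        double speedup = 1.0 / (1.0 - lockFraction);
        System.out.printf("max overall speedup: %.3fx%n", speedup); // 1.053x
    }
}
```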

What java profiler do you use?  Do you like it for working on Derby?
> However, just separating the latch data structures from the lock data
> structures could give significant scalability benefits, as right now the
> latches and locks are all stored in the same hash tables, causing
> significant monitor contention in multithreaded cases.
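The separation being proposed could look roughly like this (names are illustrative, not Derby's: locks stay in their own table while latches live in a separate ConcurrentHashMap, so latch traffic never contends on the lock table's monitors):

```java
import java.util.concurrent.ConcurrentHashMap;
import java.util.concurrent.locks.ReentrantLock;

// Sketch of separating latch state from lock state. In the scenario
// described in the thread, both live in the same hash tables; here each
// gets its own structure so the two code paths cannot contend with
// each other.
public class SeparatedTables {
    // Lock table: lockable key -> owning transaction id.
    static final ConcurrentHashMap<String, String> lockTable =
            new ConcurrentHashMap<>();
    // Latch table: page number -> its latch, in a separate map.
    static final ConcurrentHashMap<Long, ReentrantLock> latchTable =
            new ConcurrentHashMap<>();

    static void latchPage(long pageNo) {
        latchTable.computeIfAbsent(pageNo, p -> new ReentrantLock()).lock();
    }

    static void unlatchPage(long pageNo) {
        latchTable.get(pageNo).unlock();
    }

    public static void main(String[] args) {
        lockTable.put("row:42", "txn1");  // row-lock path
        latchPage(7L);                    // latch path touches a different map
        System.out.println(latchTable.get(7L).isLocked()); // true
        unlatchPage(7L);
    }
}
```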
> I dunno if our report would be of any interest to you guys, but it's not
> finished yet, and I'm not sure if it's appropriate to post it just yet.
I would be very interested in seeing the report.

> Still, it's interesting stuff we're digging up, and it's interesting to
> see how you guys investigate this. =)
