Date: Tue, 21 Nov 2006 00:17:42 +0100
From: Knut Anders Hatlen <Knut.Hatlen@Sun.COM>
Subject: Re: Releasing latches when waiting for locks. When and why?
To: derby-dev@db.apache.org

Dyre.Tjeldvoll@Sun.COM writes:

> Anders Morken writes:
>
>> We've taken some long hard looks at the lock manager, and we're
>> convinced that there's a lot of potential for improvement in Derby
>> lock management. Not that there's anything wrong with the approach
>> taken in the current implementation - correctness first,
>> optimization later. =)
>>
>> One of the things that we've observed is that on a single-row update
>> transaction, a shared lock is obtained on a row in the B-tree
>> container just before an exclusive lock is obtained on a row in the
>> data container. Off the top of my head I can't remember if we
>> figured out exactly why. =)
>>
>>> Having said that it would be interesting if someone had time to
>>> implement a higher performance latch implementation and plug it in
>>> and see how much it helps. It would decrease the total time spent
>>> in the lock manager.
>
> I think Knut Anders has been playing with that idea, so if you're not
> doing it yourself I'm sure he (we) would love to hear about any ideas
> you may have.

Well, my ideas are pretty vague at the moment.
The ideas I have played with so far (see also DERBY-1704) are:

1) Split the hash table in LockSet into multiple hash tables to reduce
   the monitor contention. This would reduce monitor contention for
   both locking and latching, but wouldn't make either of them cheaper
   in the single-client case. (A rough sketch of this striping idea is
   appended at the end of this mail.)

2) Eliminate the hash table in SinglePool by having a direct pointer
   from a compatibility space to its locks. This would reduce monitor
   contention and CPU usage for locking, but not for latching.

To reduce the cost of latching, we would at least need to move the
latches out of the lock table (LockSet). One way could be to change
the interface and distinguish between lockable and latchable objects.
As it is now, both lockable and latchable objects implement the
Lockable interface, and the LockFactory interface says that two
Lockables represent the same lockable object if their equals() methods
return true. This is fine for row locking, but for latching it is
unnecessary overhead, because the objects that are latched have no
equals() method and therefore must be the exact same object. If we
instead had a Latchable interface which was simpler than Lockable, and
which required reference equality in order to represent the same
latchable object, I think we could get closer to the "just set a flag
on the page" idea. (A sketch of what such an interface might look like
is also appended below.)

>> We've actually considered doing that for fun, but as our goal right
>> now is to produce a big honkin' report, not fix anything, we
>> haven't. Still, it would probably give a quite significant benefit.
>
> Reports are a fact of life :)
>
>> Using a Java profiler on a reasonably modern x86 system we've found
>> that while acquiring a Derby lock takes about 60-70 microseconds and
>> a latch costs 50-60 microseconds in the non-contended
>> (single-thread) case, acquiring and releasing a ReentrantLock,
>> available in Java 1.5 and up, costs about 1.5 microseconds. A latch
>> could in theory be about as cheap as a ReentrantLock, and on Java
>> 1.5 and up it would make sense to base latches on ReentrantLocks.
>
> Interesting observation! But unfortunately not available for Derby
> for a long time :(

If the interface of the lock manager is changed to be more in line
with the new concurrency utilities, we could have different lock
managers for JDK < 1.5 and JDK >= 1.5, and let the 1.5 lock manager
(or at least the latching) just be a thin wrapper around ReentrantLock
(a sketch of that is appended below as well). I don't know if that is
feasible.

>> However, just separating the latch data structures from the lock
>> data structures could give significant scalability benefits, as
>> right now the latches and locks are all stored in the same hash
>> tables, causing significant monitor contention in multithreaded
>> cases.
>
> Absolutely. In the multi-core world it is essential to keep the
> amount of shared data at an absolute minimum.
>
>> I dunno if our report would be of any interest to you guys, but it's
>> not finished yet, and I'm not sure if it's appropriate to post it
>> just yet. Still, it's interesting stuff we're digging up, and it's
>> interesting to see how you guys investigate this. =)
>
> I'd love to read it, and I expect others in the Derby community would
> too.

Indeed! It sounds like you have some very interesting numbers, and I'm
looking forward to reading your report!

--
Knut Anders
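
P.S. To make the ideas above a bit more concrete, here are some rough,
untested sketches. None of this is Derby code; all class and method
names below are invented for illustration.

First, idea 1): splitting one globally synchronized hash table into
independently synchronized stripes, so that threads locking unrelated
objects rarely contend on the same monitor (written with 1.5 generics
for brevity, even though the real thing would have to run on older
JVMs too):

    import java.util.HashMap;

    // Sketch of a striped lock table: one HashMap per stripe, each
    // guarded by its own monitor. STRIPES must be a power of two so
    // the mask below spreads keys evenly over the stripes.
    final class StripedLockTable<K, V> {
        private static final int STRIPES = 16;

        private final HashMap<K, V>[] tables;

        @SuppressWarnings("unchecked")
        StripedLockTable() {
            tables = (HashMap<K, V>[]) new HashMap[STRIPES];
            for (int i = 0; i < STRIPES; i++) {
                tables[i] = new HashMap<K, V>();
            }
        }

        // Pick the stripe for a key from its hash code.
        private HashMap<K, V> stripeFor(K key) {
            return tables[key.hashCode() & (STRIPES - 1)];
        }

        public V get(K key) {
            HashMap<K, V> stripe = stripeFor(key);
            synchronized (stripe) {  // contention limited to one stripe
                return stripe.get(key);
            }
        }

        public V put(K key, V value) {
            HashMap<K, V> stripe = stripeFor(key);
            synchronized (stripe) {
                return stripe.put(key, value);
            }
        }
    }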
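
Second, the Latchable idea: if latchable objects are identified by
reference equality, the latch state can live directly on the page
object, so taking a latch never needs to touch a shared table at all.
The interface below is just what I imagine it could look like, not an
existing Derby interface:

    // A minimal latch interface based on object identity.
    interface Latchable {
        void latch() throws InterruptedException;
        void unlatch();
    }

    // The "just set a flag on the page" variant, using plain
    // wait/notify so it also works on JVMs older than 1.5.
    class Page implements Latchable {
        private Thread latchOwner;  // null when the page is unlatched

        public synchronized void latch() throws InterruptedException {
            while (latchOwner != null) {
                wait();             // wait until the holder unlatches
            }
            latchOwner = Thread.currentThread();
        }

        public synchronized void unlatch() {
            latchOwner = null;
            notify();               // wake one waiting thread, if any
        }
    }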
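
Finally, on JDK >= 1.5 the same interface could be a thin wrapper
around java.util.concurrent.locks.ReentrantLock, which is where the
1.5-microsecond measurement above suggests most of the win would come
from; pre-1.5 JVMs would keep the wait/notify variant:

    import java.util.concurrent.locks.ReentrantLock;

    // A latch that simply delegates to ReentrantLock (JDK >= 1.5).
    class ReentrantLatch implements Latchable {
        private final ReentrantLock lock = new ReentrantLock();

        public void latch() {
            lock.lock();    // the uncontended acquire is roughly one CAS
        }

        public void unlatch() {
            lock.unlock();
        }
    }

Which of the two implementations to use could then be chosen once at
startup, depending on which JVM Derby is running on.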