db-derby-dev mailing list archives

From Knut Anders Hatlen <Knut.Hat...@Sun.COM>
Subject Re: Releasing latches when waiting for locks. When and why?
Date Tue, 12 Dec 2006 12:35:07 GMT
Olav Sandstaa <Olav.Sandstaa@Sun.COM> writes:

> Knut Anders Hatlen wrote:
[...]
> This is impressive, and much higher than what I would have expected from
> just moving the latching out of the lock manager.
>
>> I would guess that the improvement is mainly caused by
>>
>>   a) Less contention on the lock table, since the latches were no
>>      longer stored in the lock table.
>>
>>   b) Fewer context switches, because the fair queue in the lock
>>      manager wasn't used, allowing clients to process more
>>      transactions before they had to give the CPU to another thread.
>>   
>
> In addition, I would guess there is a third positive contribution
> from this patch to the increased performance:
>
>  c) Less CPU spent in the lock manager, particularly due to reduced
>     use of hash tables.

Yes, I believe this is true. I guess this is the main reason for the
increase in the one-client case (3-4%). In the test, each transaction
latches four pages (three index pages and one data page), so the patch
removes at least 8 hash accesses per transaction.
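To make a) and c) a bit more concrete, here is a very rough sketch in
plain Java (illustrative only; the names and data structures are mine,
not Derby's). With the latches kept in the lock table, every latch and
unlatch goes through a hash lookup on a shared table; with the latch on
the page object itself, it is essentially just a monitor on the page:

import java.util.Map;
import java.util.concurrent.ConcurrentHashMap;

// Rough sketch only -- not the actual Derby code.
class LatchSketch {

    // Before: latches kept in a shared, hash-based lock table, so every
    // latch/unlatch pays for a hash access and contends with lock traffic.
    static final Map<Long, Thread> lockTable = new ConcurrentHashMap<>();

    static void latchViaLockTable(long pageId) {
        while (lockTable.putIfAbsent(pageId, Thread.currentThread()) != null) {
            Thread.yield();   // someone else holds the latch; retry
        }
    }

    static void unlatchViaLockTable(long pageId) {
        lockTable.remove(pageId);   // second hash access per latch/unlatch pair
    }

    // After: the latch lives on the page object itself -- no hash access,
    // no shared table, just the page's own monitor.
    static class Page {
        private boolean latched;

        synchronized void latch() throws InterruptedException {
            while (latched) {
                wait();
            }
            latched = true;
        }

        synchronized void unlatch() {
            latched = false;
            notifyAll();   // non-fair wake-up; whoever gets the monitor wins
        }
    }
}

With four pages latched per transaction in the test, the lock-table
variant above does at least eight hash accesses (four putIfAbsent and
four remove calls) that the per-page variant avoids completely.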

>> I hadn't thought about b) before, but I think it sounds reasonable
>> that using a fair wait queue for latches would slow things down
>> considerably if there is a contention point like the root node of a
>> B-tree. I also think it sounds reasonable that the latching doesn't
>> use a fair queue, since the latches are held for such a short time
>> that starvation is not likely to be a problem.
>>   
>
> I agree that b) is likely a major contributor to the performance
> improvement you see. Would this also be the case if you run in
> client-server mode? In client-server mode you would get a context
> switch for each operation Derby does anyway, since the worker thread
> would block on network IO. Have you tried the same test to see what
> performance improvement you get when running in client-server mode
> with this patch?

I have run the same test in client/server mode, and the resulting
graph is attached. As you suggested, the improvement was not as big as
in embedded mode, but throughput still increases by more than 50% with
30 concurrent clients.
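And to expand a little on b): the difference between a fair and an
unfair latch on a hot page like the B-tree root can be sketched with
java.util.concurrent (again, just an illustration of the idea, not the
Derby latch implementation):

import java.util.concurrent.locks.ReentrantLock;

// Sketch of why a fair wait queue hurts a hot latch such as the B-tree root.
class FairVsUnfairLatch {

    // Fair: FIFO hand-off. On every unlock, ownership goes to the thread at
    // the head of the wait queue, so the releasing thread cannot immediately
    // re-acquire -- each hand-off tends to cost a context switch.
    static final ReentrantLock fairRootLatch = new ReentrantLock(true);

    // Unfair (the default): a thread that is already running may barge in
    // and take the latch again before a parked waiter is scheduled, so a
    // client can get several operations done per time slice.
    static final ReentrantLock unfairRootLatch = new ReentrantLock(false);

    static void descendBtree(ReentrantLock rootLatch) {
        rootLatch.lock();
        try {
            // ... read the root page, pick a child, latch the child ...
        } finally {
            rootLatch.unlock();   // latch is held only for a very short time
        }
    }
}

Since the latch is only held for the time it takes to read the page,
starvation with the unfair variant is unlikely, which matches the
reasoning quoted above.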

-- 
Knut Anders
