db-derby-dev mailing list archives

From Olav Sandstaa <Olav.Sands...@Sun.COM>
Subject Re: Releasing latches when waiting for locks. When and why?
Date Mon, 11 Dec 2006 11:14:27 GMT
Knut Anders Hatlen wrote:
> To see the effect of this change, I tested the patch on a dual-CPU
> machine with the test client from DERBY-1961 running single-record
> select operations. Derby was running in embedded mode, and the entire
> database was in the page cache. The results for 1 to 100 concurrent
> clients compared to the code in trunk are shown in the attached graph
> (latch.png).
> For one client, there was not much gained, but for two clients, the
> throughput increased 20% compared to trunk. For three clients, the
> increase was 40%, and it was 145% for 30 clients. This was a lot more
> than I expected! I also ran a TPC-B like test with 20 clients and saw
> a 17% increase in throughput (disk write cache was enabled).

This is impressive, and much higher than what I would have expected by 
just moving the latching out of the lock manager.

> I would guess that the improvement is mainly caused by
>   a) Less contention on the lock table since the latches no longer
>      were stored in the lock table.
>   b) Less context switches because the fair queue in the lock manager
>      wasn't used, allowing clients to process more transactions before
>      they needed to give the CPU to another thread.

In addition, I would guess there is a third contribution to the 
performance improvement from this patch:

  c) less CPU spent in the lock manager, particularly due to reduced use 
of hash tables.

> I hadn't thought about b) before, but I think it sounds reasonable
> that using a fair wait queue for latches would slow things down
> considerably if there is a contention point like the root node of a
> B-tree. I also think it sounds reasonable that the latching doesn't
> use a fair queue, since the latches are held for such a short time
> that starvation is not likely to be a problem.

I agree that b) is likely a major contributor to the performance 
improvement you see. Would this also be the case when running in 
client-server mode? In client-server mode you get a context switch for 
each operation Derby does anyway, since the worker thread blocks on 
network IO. Have you tried the same test to see what performance 
improvement this patch gives in client-server mode?
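For readers following along: the fair-vs-unfair trade-off in b) can be illustrated with java.util.concurrent's ReentrantLock, which exposes the same choice directly. This is only an analogy sketch of the mechanism, not Derby's actual latch implementation; the class name and counts here are made up for the example. A fair lock hands the lock to the longest-waiting thread (forcing a context switch per hand-off), while an unfair lock lets a running thread barge back in and keep going, which is fine for latches held only briefly:

```java
import java.util.concurrent.locks.ReentrantLock;

public class FairnessSketch {

    // Run several threads that each take a very short critical section
    // many times, mimicking latch acquisition on a hot page such as a
    // B-tree root. Returns the final count (correct under both modes).
    static long increments(ReentrantLock lock, int threads, int perThread)
            throws InterruptedException {
        final long[] counter = {0};
        Thread[] ts = new Thread[threads];
        for (int i = 0; i < threads; i++) {
            ts[i] = new Thread(() -> {
                for (int j = 0; j < perThread; j++) {
                    lock.lock();          // short hold time, like a latch
                    try {
                        counter[0]++;
                    } finally {
                        lock.unlock();
                    }
                }
            });
        }
        for (Thread t : ts) t.start();
        for (Thread t : ts) t.join();
        return counter[0];
    }

    public static void main(String[] args) throws InterruptedException {
        // true  = fair FIFO hand-off (roughly what the lock manager's
        //         fair wait queue imposed on latches)
        // false = unfair/barging acquisition (what a lightweight latch
        //         outside the lock manager can use)
        System.out.println(increments(new ReentrantLock(true), 4, 10_000));
        System.out.println(increments(new ReentrantLock(false), 4, 10_000));
    }
}
```

Timing either variant under contention typically shows the unfair lock completing noticeably faster, because far fewer acquisitions pay for a thread hand-off; starvation is the cost, which is acceptable only because latch hold times are so short.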

