db-derby-dev mailing list archives

From "David W. Van Couvering" <David.Vancouver...@Sun.COM>
Subject Re: [jira] Updated: (DERBY-704) Large page cache kills initial performance
Date Mon, 14 Nov 2005 19:40:57 GMT
There appear to be definite improvements here in terms of the 
consistency of throughput and CPU usage.

Are the regular massive spikes in throughput and CPU usage related to 
the issues that Oystein was raising about checkpointing?



Knut Anders Hatlen (JIRA) wrote:
>      [ http://issues.apache.org/jira/browse/DERBY-704?page=all ]
> Knut Anders Hatlen updated DERBY-704:
> -------------------------------------
>     Attachment: throughput.png
>                 cpu-usage.png
> Attached are graphs showing the throughput and CPU usage when running
> with and without the patch. The graphs show the average of seven runs;
> throughput and CPU usage were sampled every 30 seconds.
> 12 clients were running an update-intensive load on the database. The
> database had 10 GB of user data, and the page cache size was 512
> MB. The CPU usage is relatively low because the tests were run on an
> 8-CPU machine.
> The first 30 minutes the CPU usage was higher and the throughput
> significantly lower without the patch than with the patch.
>>Large page cache kills initial performance
>>         Key: DERBY-704
>>         URL: http://issues.apache.org/jira/browse/DERBY-704
>>     Project: Derby
>>        Type: Bug
>>  Components: Services, Performance
>>    Versions:
>> Environment: All platforms
>>    Reporter: Knut Anders Hatlen
>>    Assignee: Knut Anders Hatlen
>>     Fix For:
>> Attachments: DERBY-704.diff, cpu-usage.png, derbyall_report.txt, throughput.png
>>When the page cache is large, the performance drops while the page
>>cache is being filled. As soon as the page cache is full, the
>>throughput increases. In the period with low performance, the CPU
>>usage is high, and when the performance increases the CPU usage drops.
>>This behaviour is caused by the algorithm for finding free slots in
>>the page cache. If there are invalid pages in the page cache, it will
>>be scanned to find one of those pages. However, when multiple clients
>>access the database, the invalid pages are often already taken. This
>>means that the entire page cache will be scanned, but no free invalid
>>page is found. Since the scan of the page cache is synchronized on the
>>cache manager, all other threads that want to access the page cache
>>have to wait. When the page cache is large, this will kill the
>>performance. When the page cache is full, this is not a problem, as
>>there will be no invalid pages.
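To make the contention pattern concrete, here is a minimal, hypothetical sketch of the behaviour described above (class and method names are illustrative, not Derby's actual code): the search for a reusable slot scans the whole cache inside a single synchronized method, so once all invalid pages are taken, every lookup pays a full scan while holding the cache-wide lock.

```java
import java.util.ArrayList;
import java.util.List;

// Simplified model of the slot-finding problem; not Derby's real CacheManager.
class PageCache {
    static class Entry {
        boolean valid;              // an invalid entry is a reusable slot
        Entry(boolean valid) { this.valid = valid; }
    }

    private final List<Entry> entries = new ArrayList<>();
    long slotsScanned = 0;          // instrumentation: total entries examined

    PageCache(int size) {
        for (int i = 0; i < size; i++) entries.add(new Entry(false));
    }

    // The whole scan runs while holding the cache-wide lock, so every
    // other thread that needs the page cache blocks until it finishes.
    synchronized Entry findFreeSlot() {
        for (Entry e : entries) {
            slotsScanned++;
            if (!e.valid) {         // found an invalid page: reuse its slot
                e.valid = true;
                return e;
            }
        }
        return null;                // scanned the entire cache, nothing free
    }
}

public class ScanDemo {
    public static void main(String[] args) {
        PageCache cache = new PageCache(100_000);
        // Simulate clients claiming every invalid page during warm-up.
        for (int i = 0; i < 100_000; i++) cache.findFreeSlot();
        long warm = cache.slotsScanned;
        cache.findFreeSlot();       // all slots valid: full scan, finds none
        System.out.println(cache.slotsScanned - warm);
    }
}
```

With a 512 MB page cache the real scan covers on the order of a hundred thousand entries per failed lookup, all while other threads wait on the monitor, which matches the high CPU and low throughput reported during the warm-up phase.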
