db-derby-dev mailing list archives

From Matt Doran <matt.do...@papercut.biz>
Subject Re: Derby 10.3.1 performance (was [ANNOUNCE] Apache Derby released)
Date Wed, 15 Aug 2007 01:32:46 GMT
Olav Sandstaa <Olav.Sandstaa@...> writes:

> There were several performance improvements that went in to this 
> release. I probably do not remember all of them but here are at least some:
>   * Reuse of ResultSet (DERBY-827)
>   * New lock manager - CPU reduction mostly as a result of much less 
> synchronization (several JIRAs, eg. DERBY-1704)
>   * Move latching out of lock manager (DERBY-2107)
>   * BitSet manipulations (several JIRAs, e.g. DERBY-2226, 2191)
>   * Reduced use of synchronization (several JIRAs, e.g. 2149, 2150)
> For some examples comparing the performance of Derby 10.2 and Derby 10.3 
> see for instance slide 29 in:
> or slide 30 in:
> http://home.online.no/~olmsan/publications/pres/jazoon07/
> For improvements for some other types of loads see also the results from 
> a nightly performance regression test which compares trunk to 10.2.2:
>    http://home.online.no/~olmsan/derby/perf/

Thanks for all the info.  The performance gains look impressive.

We have generally been very impressed with Derby's robustness and performance.
We use Derby in an embedded mode.  We have found some performance issues with
some more complex queries on large datasets.  We usually recommend larger
customers run our product (PaperCut NG) on a native DB (like SQL Server,
Postgres, etc), because they seem to perform better on bigger datasets (maybe
due to having less memory constraints).

Admittedly we have not spent very much time analysing Derby performance and
understanding why these queries are slow.  Given the impressive performance
shown in those presentations, our performance issues are probably an indication
that we have not tuned the Derby configuration or queries adequately.  When I
get some time I'll analyse these in more detail.

We do have some constraints when tuning the Derby configuration.  Our product is
designed to be easy to install and maintain ... and *must* work out of the box
on all systems.  This means we cannot allocate too much memory to the page cache
(which has a big impact on performance) by default ... because it could cause
out-of-memory errors on memory-limited systems.  So our performance suffers on more
powerful systems in order to support the smaller ones.
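For context, Derby's page cache is sized by the derby.storage.pageCacheSize
property, which is a page count (default 1000 pages of 4 KB each, roughly 4 MB).
A conservative static setting in derby.properties might look like this (the
value shown is just the out-of-the-box default, for illustration):

```
# derby.properties -- shipped with the product, so it must be safe
# on low-memory systems.  1000 pages x 4 KB default page size ~= 4 MB.
derby.storage.pageCacheSize=1000
```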

It would be great if we could have a more dynamic way to define the page cache
size.  For example as a percentage of the maximum memory allocated to the JVM
(e.g. Runtime.getRuntime().maxMemory()).  This would allow us to, say, use 10% of
JVM memory for the page cache, which would provide some basic auto-tuning
depending on the available memory.  We configure the JVM itself to use a
percentage of available system memory (which works well at the application
level).
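A minimal sketch of that percentage-based idea, assuming Derby's default 4 KB
page size (the class and method names here are hypothetical, not part of any
Derby API):

```java
public class PageCacheAutoTune {
    // Derby's default page size; an assumption for this estimate.
    static final long PAGE_SIZE_BYTES = 4096;

    // Translate a fraction of the JVM's max heap into a Derby page count,
    // never going below Derby's default of 1000 pages.
    static long pagesForFraction(double fraction) {
        long budget = (long) (Runtime.getRuntime().maxMemory() * fraction);
        return Math.max(1000, budget / PAGE_SIZE_BYTES);
    }

    public static void main(String[] args) {
        long pages = pagesForFraction(0.10);
        // Must be set before the embedded driver boots the database.
        System.setProperty("derby.storage.pageCacheSize", Long.toString(pages));
        System.out.println("derby.storage.pageCacheSize=" + pages);
    }
}
```

This keeps the small-system floor while letting larger JVMs get a bigger cache
automatically.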

What do you think?

Thanks again for all your efforts!!

