db-derby-user mailing list archives

From "vodarus vodarus" <voda...@gmail.com>
Subject Re: Speed of using Derby DB
Date Thu, 19 Jun 2008 14:33:47 GMT
>
> Just to be sure, you did recreate the tables?
> In any case, the page size would mostly help pull data in faster and that
> doesn't matter for this test.
>
>
>> What is "Øystein's approach"? Can you write the steps to get the 2.4
>> second time?
>>
>
> Øystein's approach is to use the query "insert into testtotals select
> client, sum(order_amount) from testbig group by client;".
> As you state, this is not what you want in your case and it might not be
> applicable.

No, this is a test for non-pure-SQL functions only.
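
(For completeness, a minimal sketch of running Øystein's query over JDBC;
the database URL "jdbc:derby:testdb" and the auto-commit handling are my
assumptions, not part of the original test:)

import java.sql.Connection;
import java.sql.DriverManager;
import java.sql.SQLException;
import java.sql.Statement;

public class PureSqlTotals {
    public static void main(String[] args) throws SQLException {
        // "jdbc:derby:testdb" is a placeholder embedded-database URL.
        try (Connection conn = DriverManager.getConnection("jdbc:derby:testdb");
             Statement stmt = conn.createStatement()) {
            conn.setAutoCommit(false);
            // The whole aggregation runs inside the engine; no rows
            // cross the JDBC boundary.
            stmt.executeUpdate(
                "insert into testtotals "
                + "select client, sum(order_amount) from testbig group by client");
            conn.commit();
        }
    }
}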

>
> I could also get down to these times by using a HashMap to store the
> intermediate totals in 'calculateTotalCommon'. This does of course use more
> memory and might cause trouble if you don't know the number of clients in
> your table (i.e. whether you need 25 thousand or 100 million entries in the
> map).

Also not applicable, because the amount of data is 10-100 times larger than RAM.
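
(Still, for reference, a rough sketch of the HashMap variant Kristian
describes; only the table and column names come from the thread, the URL,
column types and the rest of the plumbing are guessed:)

import java.sql.Connection;
import java.sql.DriverManager;
import java.sql.ResultSet;
import java.sql.SQLException;
import java.sql.Statement;
import java.util.HashMap;
import java.util.Map;

public class HashMapTotals {
    public static void main(String[] args) throws SQLException {
        Map<Integer, Long> totals = new HashMap<Integer, Long>();
        try (Connection conn = DriverManager.getConnection("jdbc:derby:testdb");
             Statement stmt = conn.createStatement();
             ResultSet rs = stmt.executeQuery(
                     "select client, order_amount from testbig")) {
            while (rs.next()) {
                int client = rs.getInt(1);
                long amount = rs.getLong(2);
                Long sum = totals.get(client);
                totals.put(client, sum == null ? amount : sum + amount);
            }
        }
        // One map entry per distinct client: fine for 25 thousand
        // clients, trouble for 100 million entries.
        System.out.println("clients: " + totals.size());
    }
}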


> It seems that what happens is that the log buffer fills up. By increasing the
> log buffer, I was able to get a little better performance. As always with
> tuning, it's about balance and tradeoffs. If your IO system is really good,
> maybe you can run with a big log buffer and get better performance. However,
> the effect you see from this also depends on how often you have commits
> (then the complete buffer is flushed anyway, at least in Derby).
>
> So, in short, experiment with the following, using either the "insert
> into..." query or your client code modified to somehow store the totals in
> memory:
>  a) Log buffer size
>  b) Page cache size (and JVM heap)
>  c) Page size
>
> One of my attempts looked like this:
> java -Xmx512M -Dderby.storage.pageSize=32768
> -Dderby.storage.logBufferSize=524288 -Dderby.storage.pageCacheSize=2500 -cp
> .:${JDB10413} derbytest.FatTest


This shows a time of 9.5 sec; the best is Oracle at 1.5 sec.
Also, do these parameters work with an existing database, or do you need to
recreate the database for them to take effect?
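
(Sketch of setting the same knobs from application code instead of the
command line; the URL is a placeholder. As far as I can tell from the Derby
tuning guide, pageSize only applies to tables and indexes created after it
is set, while logBufferSize and pageCacheSize take effect on every boot:)

import java.sql.Connection;
import java.sql.DriverManager;

public class TunedBoot {
    public static void main(String[] args) throws Exception {
        // Must be set before the engine boots (i.e. before the first
        // connection). An existing database keeps the old page size for
        // its existing tables unless they are recreated.
        System.setProperty("derby.storage.pageSize", "32768");
        System.setProperty("derby.storage.logBufferSize", "524288");
        System.setProperty("derby.storage.pageCacheSize", "2500");
        try (Connection conn =
                 DriverManager.getConnection("jdbc:derby:testdb")) {
            // ... run the test ...
        }
    }
}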

>
>
> Using your original test code I haven't been able to get lower than around
> 5 seconds (best), the average being somewhere around 6 seconds.

The best I could get was 9.5 sec. What other improvements can be made to the
application?

>
>
> As always, you have to do your own tests on your own system to see if it is
> good enough for your use :)

:)

>
> Often there are other things to consider besides performance, for instance
> installation and ease of use.

:) Yes, but the performance issue is very important, more important than
installation. With Oracle, each analysis will take about 100 (algorithms) *
100 (the real data is 100 times larger than in the experiment) * 1.5 sec =
15,000 sec ≈ 4.17 hours, so roughly 6 data parts a day per server.

With Java: 100 * 100 * 9.5 sec = 95,000 sec ≈ 26.4 hours each, so less than
ONE data part a day per server, and the company would need to buy and run
about 6 times more servers than it uses now. Another disadvantage is that
Java does not integrate SQL into the language, so SQL can only be validated
at runtime, not at compile time (unlike PL/SQL).

>
> Does anyone have any ideas on other possible tunings?
>
> --
> Kristian
>