db-derby-dev mailing list archives

From: Mike Matrigali <mikem_...@sbcglobal.net>
Subject: Re: Performance tests for very large data sets?
Date: Wed, 14 Jun 2006 18:50:36 GMT
Sounds like a pretty simple test program; probably the best thing to
do is to post your exact DDL and a simple test program showing the
problem.  That will answer all the usual questions:
- DDL, i.e. what indexes you have
- server config, if any
- inserts-per-transaction config
- log config
- whether the system is CPU-bound or I/O-bound at the point where
  performance plummets
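
Something along these lines would answer most of that in one go.  This
is only a sketch -- the database name, tables, columns and row counts
here are made up, not your real schema -- but it shows the shape of a
useful repro:

import java.sql.Connection;
import java.sql.DriverManager;
import java.sql.PreparedStatement;
import java.sql.Statement;

public class InsertTimingRepro {
    public static void main(String[] args) throws Exception {
        // Embedded Derby; "testdb" is a made-up database name.
        Class.forName("org.apache.derby.jdbc.EmbeddedDriver");
        Connection conn =
            DriverManager.getConnection("jdbc:derby:testdb;create=true");
        Statement ddl = conn.createStatement();
        // Stand-in DDL -- a real report should use the exact schema,
        // including the wide text foreign key and any extra indexes.
        ddl.executeUpdate("CREATE TABLE parent "
            + "(pk VARCHAR(200) NOT NULL PRIMARY KEY)");
        ddl.executeUpdate("CREATE TABLE child (id INT NOT NULL PRIMARY KEY, "
            + "fk VARCHAR(200) REFERENCES parent (pk), payload VARCHAR(100))");

        conn.setAutoCommit(false);
        PreparedStatement insParent =
            conn.prepareStatement("INSERT INTO parent VALUES (?)");
        for (int i = 0; i < 1000; i++) {
            insParent.setString(1, "some-fairly-long-text-key-" + i);
            insParent.executeUpdate();
        }
        conn.commit();

        PreparedStatement insChild =
            conn.prepareStatement("INSERT INTO child VALUES (?, ?, ?)");
        int commitEvery = 1000;   // inserts per transaction -- worth varying
        long start = System.currentTimeMillis();
        for (int i = 1; i <= 1000000; i++) {
            insChild.setInt(1, i);
            insChild.setString(2, "some-fairly-long-text-key-" + (i % 1000));
            insChild.setString(3, "row " + i);
            insChild.executeUpdate();
            if (i % commitEvery == 0) {
                conn.commit();
            }
            if (i % 100000 == 0) {
                System.out.println(i + " rows, "
                    + (System.currentTimeMillis() - start) + " ms total");
            }
        }
        conn.commit();
        conn.close();
    }
}

Posting the per-100,000-row timings from a run like this, together with
the real DDL, would make it much easier to see where the slowdown starts.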

What does "plummets" mean here, exactly -- how much slower do the
inserts get?

I would expect per-insert time to be relatively stable until the page
cache fills, and then the whole system probably becomes I/O bound.
When that happens depends on the size of the data, the actual disk
hardware, the log I/O config, ...
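
Two knobs that shift where that knee shows up are the page cache size
and where the transaction log lives.  A minimal sketch, assuming
embedded Derby -- the property value and the log path are only
illustrative:

import java.sql.Connection;
import java.sql.DriverManager;

public class CacheAndLogTuning {
    public static void main(String[] args) throws Exception {
        // Must be set before the Derby engine boots.  The page cache is
        // counted in pages, so with the default 4K page size 10000 pages
        // is roughly 40 MB of cache (the default is 1000 pages).
        System.setProperty("derby.storage.pageCacheSize", "10000");
        // logDevice (honored only when the database is created) puts the
        // transaction log on a separate disk, so log writes do not compete
        // with data page I/O.  The path here is just an example.
        Connection conn = DriverManager.getConnection(
            "jdbc:derby:bigdb;create=true;logDevice=/otherdisk/derbylog");
        conn.close();
    }
}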

Merlin Beedell at Demon wrote:
> I hope that this is a common enough request that some results and 
> comments already exist. Just point me in the right direction!
> I would like to know how Derby performs (inserts and search) as the 
> record count grows from 1 to 1,000 million or greater, particularly if 
> there are foreign key constraints included, and how it compares with 
> other databases.
> I fully recognise that such tests are quite subjective and vary greatly 
> with configuration settings that do not change the SQL used.
> We have a situation where Derby performs well up to 100,000 records or 
> so, then (insert) performance plummets. It uses a fairly large text 
> foreign key, and we are changing this to an auto-number key instead, 
> which we know will help.  The aim is to have good performance up to 6 GB 
> worth of data (approx. 60-120 million rows over a small number of tables).
> Some experience from other developers may well help us move forward, as 
> well as any reasoned test results that show that Derby is a good choice 
> for us.
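
For reference, a rough sketch of what the auto-number-key variant
mentioned above might look like in Derby DDL -- the table and column
names are invented, and the real sizes and constraints would differ:

import java.sql.Connection;
import java.sql.DriverManager;
import java.sql.Statement;

public class IdentityKeySketch {
    public static void main(String[] args) throws Exception {
        Connection conn =
            DriverManager.getConnection("jdbc:derby:testdb2;create=true");
        Statement ddl = conn.createStatement();
        // The wide text value is stored once, keyed by a Derby-generated
        // integer; child rows carry only the narrow surrogate key.
        ddl.executeUpdate("CREATE TABLE lookup ("
            + " id INT NOT NULL GENERATED ALWAYS AS IDENTITY,"
            + " long_text_key VARCHAR(500) NOT NULL,"
            + " PRIMARY KEY (id),"
            + " UNIQUE (long_text_key))");
        ddl.executeUpdate("CREATE TABLE detail ("
            + " id BIGINT NOT NULL GENERATED ALWAYS AS IDENTITY,"
            + " lookup_id INT NOT NULL REFERENCES lookup (id),"
            + " payload VARCHAR(100),"
            + " PRIMARY KEY (id))");
        conn.close();
    }
}

With this shape the foreign-key index on the detail table is over an
INT rather than a long VARCHAR, which keeps both the index and the
referencing rows much smaller.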
