From yonestar <y...@higtek.com>
Subject database chunking
Date Fri, 14 Dec 2007 20:10:17 GMT

Hi,

I have a large table (1 million rows), and I want to write its contents to a
file in a specified CSV format.

If I try to fetch all the rows at once, I run out of heap space (of course).
I can break the query into chunks using JPA's Query.setFirstResult() and
Query.setMaxResults(), but this is slow: judging from some benchmarking I
did, it seems that all the records up to the first requested one are
retrieved and then thrown away (i.e., setFirstResult(n) reads through the
first n records and simply discards them), so each chunk costs more than the
last. Over 1 million rows this method is very slow.
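
For reference, this is roughly what my chunked loop looks like (a sketch
only; the entity name, the EntityManager "em", and the writeCsv() helper are
placeholders for my actual code):

    import java.util.List;
    import javax.persistence.EntityManager;
    import javax.persistence.Query;

    // Offset-based chunking: each setFirstResult(first) call appears to
    // re-read and discard the first 'first' rows, so chunks get slower
    // the deeper into the table they start.
    int chunkSize = 1000;
    for (int first = 0; ; first += chunkSize) {
        Query q = em.createQuery("SELECT r FROM Record r ORDER BY r.id");
        q.setFirstResult(first);
        q.setMaxResults(chunkSize);
        List<?> rows = q.getResultList();
        if (rows.isEmpty())
            break;           // no more rows: done
        writeCsv(rows);      // hypothetical helper that appends to the file
    }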

What I'm looking for is some way to "get the next 1000 rows", where the
database remembers the last position in the table -- i.e., it doesn't have
to seek back to the starting point for each chunk. How can I efficiently
break the table into chunks?
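
To make the idea concrete, here is the kind of keyset-style loop I'm
imagining, in plain JDBC (again just a sketch: it assumes an indexed numeric
primary key column ID starting at 1, a table BIG_TABLE, an open Connection
"conn", and a writeCsvRow() helper -- none of which are my real names):

    import java.sql.Connection;
    import java.sql.PreparedStatement;
    import java.sql.ResultSet;

    // Keyset-style chunking: remember the last key seen and start the
    // next chunk just after it, so the database can seek via the index
    // instead of re-reading and discarding all earlier rows.
    PreparedStatement ps = conn.prepareStatement(
            "SELECT id, data FROM big_table WHERE id > ? ORDER BY id");
    ps.setMaxRows(1000);          // cap each chunk at 1000 rows
    long lastId = 0;              // assumes IDs are positive
    boolean more = true;
    while (more) {
        ps.setLong(1, lastId);
        ResultSet rs = ps.executeQuery();
        more = false;
        while (rs.next()) {
            more = true;
            lastId = rs.getLong("id");
            writeCsvRow(rs);      // hypothetical CSV helper
        }
        rs.close();
    }
    ps.close();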

Thanks!!
-- 
View this message in context: http://www.nabble.com/database-chunking-tp14339770p14339770.html
Sent from the Apache Derby Developers mailing list archive at Nabble.com.

