db-derby-dev mailing list archives

From Oystein Grovlen - Sun Norway <Oystein.Grov...@Sun.COM>
Subject Re: database chunking
Date Mon, 17 Dec 2007 10:12:16 GMT
yonestar wrote:
> Hi,
> I have a large table (1 million rows).  I want to write its contents to a
> file in a specified CSV format.

Have you tried Derby's built-in stored procedure for export?
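For reference, the documented export procedure is SYSCS_UTIL.SYSCS_EXPORT_TABLE; passing null for the delimiter and codeset arguments selects the defaults (comma column delimiter, double-quote character delimiter, system codeset). The schema, table, and file names below are placeholders. A call from ij might look like:

```sql
-- Export APP.MYTABLE to mytable.csv; the nulls pick the default
-- comma/double-quote CSV formatting and the system codeset.
CALL SYSCS_UTIL.SYSCS_EXPORT_TABLE('APP', 'MYTABLE', 'mytable.csv', null, null, null);
```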

> If I try to get all the rows at once, I run out of heap space (of course).
> I can break it into chunks using JPA's Query.setFirstResult() and
> Query.setMaxResults(), but this is slow. Judging from some benchmarking I
> did, it seems that all the records up to the first requested one are
> retrieved and ignored (i.e., setFirstResult(n) will retrieve all the
> records but simply toss the ones < n). Using this method is very slow over
> 1 million rows.

I do not understand why you necessarily run out of heap space. 
Derby does not need to store the entire table in memory in order to 
select all the rows.
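With a plain JDBC statement and a forward-only, read-only result set, rows can be consumed one at a time as ResultSet.next() is called, so only the current row (plus the fetch buffer) has to fit in the heap. A minimal sketch, assuming an already-open java.sql.Connection named conn and a table MYTABLE with ID and NAME columns (all placeholder names):

```java
// Sketch: stream a large table to CSV without paging it into memory.
// Assumes an open Connection `conn` and a table MYTABLE(ID, NAME).
try (Statement stmt = conn.createStatement(
             ResultSet.TYPE_FORWARD_ONLY, ResultSet.CONCUR_READ_ONLY);
     BufferedWriter out = Files.newBufferedWriter(Paths.get("mytable.csv"))) {
    stmt.setFetchSize(1000);  // hint: rows to buffer per fetch
    try (ResultSet rs = stmt.executeQuery("SELECT id, name FROM mytable")) {
        while (rs.next()) {   // one row at a time; no full-table list in heap
            out.write(rs.getInt(1) + "," + rs.getString(2));
            out.newLine();
        }
    }
}
```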
> What I'm looking for is some way to "get the next 1000 rows", where the DB
> remembers the last position in the table -- i.e., it doesn't have to seek
> to the entry point each time. How can I efficiently break the table into
> chunks?
> Thanks!!
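One common way to get "the next 1000 rows" without rescanning from the start is keyset (seek) pagination: order by an indexed unique key and request only rows greater than the last key seen, so each chunk begins with an index lookup rather than a scan-and-discard. In JDBC terms that would be something like SELECT ... WHERE id > ? ORDER BY id combined with Statement.setMaxRows(1000). The loop structure is sketched below over an in-memory list standing in for the table; fetchChunk is a hypothetical placeholder for the real query:

```java
import java.util.*;
import java.util.stream.*;

public class KeysetChunks {
    // Placeholder for the real query (e.g. WHERE id > ? ORDER BY id):
    // return up to chunkSize ids strictly greater than lastId, in order.
    static List<Integer> fetchChunk(List<Integer> table, int lastId, int chunkSize) {
        return table.stream()
                .filter(id -> id > lastId)   // "seek" past rows already exported
                .sorted()
                .limit(chunkSize)
                .collect(Collectors.toList());
    }

    public static void main(String[] args) {
        // Toy table: ids 1..10_000 standing in for the 1-million-row table.
        List<Integer> table = IntStream.rangeClosed(1, 10_000)
                .boxed().collect(Collectors.toList());
        int lastId = 0, chunks = 0, rows = 0;
        while (true) {
            List<Integer> chunk = fetchChunk(table, lastId, 1000);
            if (chunk.isEmpty()) break;
            rows += chunk.size();
            chunks++;
            lastId = chunk.get(chunk.size() - 1);  // remember position for next chunk
        }
        System.out.println(chunks + " chunks, " + rows + " rows");
        // prints "10 chunks, 10000 rows"
    }
}
```

The key point is that the database only ever has to find rows with id greater than a remembered value, which an index satisfies directly, instead of materializing and discarding everything before row n as setFirstResult(n) appears to do.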

