db-derby-user mailing list archives

From Suavi Ali Demir <dem...@yahoo.com>
Subject Re: Iterate over big table...
Date Mon, 12 Jun 2006 15:02:55 GMT
If it is a forward-only cursor, when you do a select * and loop through
all rows, calling doSomething() with each row (and do not hold a
reference to the data yourself), there would be only one row's data in
memory at a given time, no? (Or a few rows, if Derby prefetches in
chunks as you call rs.next().)
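
For instance, a rough sketch along those lines (the connection URL,
class name, and doSomething() below are placeholders, not from the
thread; untested):

import java.io.InputStream;
import java.sql.*;

public class StreamAllRows {
    static void doSomething(InputStream in) { /* per-row work */ }

    public static void main(String[] args) throws Exception {
        Connection conn = DriverManager.getConnection("jdbc:derby:mydb");
        // forward-only, read-only cursor: no scrollable caching needed
        Statement stmt = conn.createStatement(
                ResultSet.TYPE_FORWARD_ONLY, ResultSet.CONCUR_READ_ONLY);
        ResultSet rs = stmt.executeQuery(
                "SELECT numvign, data FROM data ORDER BY numvign");
        while (rs.next()) {
            // stream the BLOB so the whole value need not sit in memory
            InputStream in = rs.getBinaryStream("data");
            doSomething(in);  // keep no reference to the row afterwards
            in.close();
        }
        rs.close();
        stmt.close();
        conn.close();
    }
}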
  

Dyre.Tjeldvoll@Sun.COM wrote:
  yves pielusenet writes:

> On Friday, 19 May 2006 at 13:13 -0700, Stanley Bradbury wrote:
>> yves pielusenet wrote:
>> 
>> >Hello,
>> >I have such a table :
>> >CREATE TABLE data (
>> > numvign INTEGER PRIMARY KEY,
>> > data BLOB NOT NULL
>> >)
>> >
>> >This table should have millions of records. I have to iterate over all
>> >records.
>> >What is the best way (I don't want to get an OutOfMemoryError):
>> >
>> >for(int i=0; i<numRecords; i++){
>> > theData = select data from data where numvign=i;
>> > doSomething(theData);
>> >}
>> >
>> >or :
>> >
>> >alldata = select data from data order by numvign;
>> >while(alldata.hasNext()){
>> > doSomething(alldata.next());
>> >}

Depends on what "doSomething" does. If doSomething could be expressed
as SQL or done from a stored procedure, you wouldn't have to
materialize the entire 1 million row result set in the client.
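
Derby stored procedures are just public static Java methods registered
with CREATE PROCEDURE, so the per-row work can run inside the engine.
A rough sketch (the procedure name, class name, and registration below
are made up for illustration):

import java.sql.*;

public class DataProcs {
    // registered once with something like:
    //   CREATE PROCEDURE PROCESS_ALL()
    //     PARAMETER STYLE JAVA READS SQL DATA LANGUAGE JAVA
    //     EXTERNAL NAME 'DataProcs.processAll'
    public static void processAll() throws SQLException {
        // "jdbc:default:connection" is the nested connection available
        // inside the engine, so no rows cross the network
        Connection conn =
                DriverManager.getConnection("jdbc:default:connection");
        Statement stmt = conn.createStatement();
        ResultSet rs = stmt.executeQuery("SELECT numvign, data FROM data");
        while (rs.next()) {
            // equivalent of doSomething(), row by row, server-side
        }
        rs.close();
        stmt.close();
    }
}

The client then just issues: CALL PROCESS_ALL()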

Instead of either getting all 1 million rows into a single ResultSet,
or performing 1 million queries that each yield a single-row ResultSet,
you could read as many rows as you think is safe, and repeat until you
are done. But this doesn't work so well if there are "holes" in your pk
sequence...
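
One way to sidestep the holes problem is to key each batch off the last
primary key actually seen rather than off computed ranges (keyset-style
paging). A rough sketch, with an arbitrary batch size of 1000 and made-up
names:

import java.sql.*;

public class BatchedRead {
    static void readInBatches(Connection conn) throws SQLException {
        PreparedStatement ps = conn.prepareStatement(
                "SELECT numvign, data FROM data WHERE numvign > ? " +
                "ORDER BY numvign");
        ps.setMaxRows(1000);              // cap each batch at 1000 rows
        int lastKey = Integer.MIN_VALUE;  // resume point between batches
        boolean more = true;
        while (more) {
            ps.setInt(1, lastKey);
            more = false;
            ResultSet rs = ps.executeQuery();
            while (rs.next()) {
                lastKey = rs.getInt("numvign");
                // doSomething() on rs.getBinaryStream("data") here
                more = true;
            }
            rs.close();
        }
        ps.close();
    }
}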

-- 
dt


