That is a possibility.  I could select out the oldest date, add an hour 
to that, and grab the packets in that range.
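
A minimal sketch of that approach in JDBC, assuming a spool table named 
spool_table with a date_inserted TIMESTAMP column (those two names come 
from the query quoted below; everything else here is hypothetical):

    import java.sql.*;

    public class SpoolChunker {

        /** Forward one hour's worth of spooled rows, oldest first. */
        public static void processNextHour(Connection conn) throws SQLException {
            // Find the oldest timestamp still sitting in the spool.
            Timestamp oldest = null;
            try (Statement st = conn.createStatement();
                 ResultSet rs = st.executeQuery(
                         "SELECT MIN(date_inserted) FROM spool_table")) {
                if (rs.next()) {
                    oldest = rs.getTimestamp(1);
                }
            }
            if (oldest == null) {
                return;  // spool is empty
            }

            // Grab everything from that point up to one hour later,
            // ordered so rows go out in the order they went in.
            Timestamp upper = new Timestamp(oldest.getTime() + 3600L * 1000L);
            try (PreparedStatement ps = conn.prepareStatement(
                         "SELECT * FROM spool_table "
                       + "WHERE date_inserted >= ? AND date_inserted < ? "
                       + "ORDER BY date_inserted ASC")) {
                ps.setTimestamp(1, oldest);
                ps.setTimestamp(2, upper);
                try (ResultSet rs = ps.executeQuery()) {
                    while (rs.next()) {
                        // send the row to the stats server here, then
                        // delete it from the spool once it is accepted
                    }
                }
            }
        }
    }

Rows stamped exactly at the upper bound are left for the next pass, so 
nothing is processed twice or out of order.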

Suavi Ali Demir wrote:

If 100 is not an absolute necessity, you can use datetime ranges. For example, select 1 hour of logs at a time.

Ali



--- On Tue, 5/13/08, Matt Chambers <chambers@imageworks.com> wrote:

From: Matt Chambers <chambers@imageworks.com>
Subject: top N reporting with derby
To: derby-user@db.apache.org
Date: Tuesday, May 13, 2008, 6:11 PM

Hi guys, new to the list.   I'm a new Derby user, just grabbed the 
latest version.

I have a Tomcat application that sends some statistics to a server for 
processing.  The statistics are not mission critical or anything; they 
are just nice to have, so if the statistics server goes down, no big 
deal.  It's actually never gone down, but in case it does, I would like 
to spool the updates into a Derby DB.  So I changed the code around a 
bit to always store and forward through Derby instead of sending 
directly to the stats server, but I'm stuck.
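
A minimal sketch of that store-and-forward step, assuming a hypothetical 
spool_table(date_inserted TIMESTAMP, payload VARCHAR(4000)) layout; only 
the table name and the date_inserted column appear in the query further 
down, the payload column is made up for illustration:

    import java.sql.*;

    public class StatsSpool {

        /** Instead of posting directly to the stats server, park the
         *  update in the local Derby spool with an insertion timestamp. */
        public static void spool(Connection conn, String payload) throws SQLException {
            try (PreparedStatement ps = conn.prepareStatement(
                         "INSERT INTO spool_table (date_inserted, payload) "
                       + "VALUES (CURRENT_TIMESTAMP, ?)")) {
                ps.setString(1, payload);
                ps.executeUpdate();
            }
        }
    }

A separate background job can then drain the spool and forward the rows 
whenever the stats server is reachable.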

I have two requirements:
1. If the stats server is down for days, I should be able to process 
the spool in small chunks over time when it comes back up.
2. Each row has to be processed in the order in which it was placed in 
the table, or else the stats server could drop it anyway.

Normally, I would do a SELECT * FROM spool_table ORDER BY date_inserted 
ASC LIMIT 100 to grab the oldest entries, but Derby can't do LIMIT and 
you can't use ORDER BY in subselects.  So how do I get my data out of 
the table in small chunks in the same order it went in?
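
One possible workaround, sketched here with the same spool_table and 
date_inserted names, is to keep the ORDER BY and let the JDBC driver cap 
the result at 100 rows with Statement.setMaxRows():

    import java.sql.*;

    public class OldestFirst {

        /** Read the 100 oldest spooled rows; Derby still sorts the whole
         *  table, but the driver stops returning rows after the first 100. */
        public static void forwardOldest100(Connection conn) throws SQLException {
            try (Statement st = conn.createStatement()) {
                st.setMaxRows(100);
                try (ResultSet rs = st.executeQuery(
                             "SELECT * FROM spool_table "
                           + "ORDER BY date_inserted ASC")) {
                    while (rs.next()) {
                        // forward each row to the stats server in order,
                        // deleting it from the spool once it is accepted
                    }
                }
            }
        }
    }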

Any ideas?

-Matt



--
-Matt
