db-derby-user mailing list archives

From: Peter Ondruška <peter.ondru...@gmail.com>
Subject: Re: OutOfMemoryErrors on group by select
Date: Mon, 08 Feb 2010 14:37:59 GMT
Try creating a non-unique index on the column you group by. Have you
tried a more recent Derby version to see if you can reproduce the problem there?
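
For example, a non-unique index could be created like this (using the table and column
names from the statement quoted below; the index name is just illustrative):

CREATE INDEX myTable_groupCol_idx ON myTable (groupCol);

With such an index in place, the optimizer may be able to use the index ordering for the
GROUP BY instead of buffering the entire grouped result in memory.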

On Mon, Feb 8, 2010 at 3:25 PM, Ronald Rudy <ronchalant@gmail.com> wrote:
> Version is 10.4.2.0, and no I don't - I am actually trying the group by/etc. on another
> column that is indexed to see if there's any memory benefit. The table isn't optimized
> for the statement below because in production it will never be executed like that; I'm
> just trying to monitor some counts while the app is running.
>
>
> On Feb 8, 2010, at 9:21:29 AM, Peter Ondruška wrote:
>
>> What Derby version is this? Do you have an index on groupCol?
>>
>> On Mon, Feb 8, 2010 at 3:11 PM, Ronald Rudy <ronchalant@gmail.com> wrote:
>>> I've gotten to the point where I seem to be able to frequently induce OutOfMemoryErrors
>>> when executing a statement like this one:
>>>
>>> SELECT groupCol, count(*) FROM myTable GROUP BY groupCol;
>>>
>>> The total number of records doesn't seem terribly large (about 400,000), though while
>>> I'm executing the above statement there are likely some inserts and deletes happening
>>> concurrently. I don't need anything transactional here; I'm really just monitoring a
>>> table from a separate network connection.
>>>
>>> In our app the database runs in its own JSVC daemon as a NetworkServer with 512MB
>>> allocated to it. Page cache size is set to 2000. Page size should be the Derby default
>>> (4096, I think).
>>>
>>> I cannot be 100% sure that the above is specifically what is causing the error, but I
>>> can say that I hadn't seen memory errors until I started monitoring this table frequently,
>>> and even then I really only saw them when the table size started getting a bit bigger
>>> (not big by any stretch, but > 300k rows).
>>>
>>> The table itself is not really big - there are 7 varchar columns along with an ID column.
>>>
>>> In this production-level situation, stability is more important than performance. I'd
>>> like Derby configured so that no matter what SQL is requested, the daemon itself is not
>>> impacted.
>>>
>>> I'd also like to build in some handling so that if/when an OutOfMemoryError is
>>> encountered, the daemon automatically restarts. For this I might need to add a script
>>> via the -XX:OnOutOfMemoryError JVM option, unless there's some way to handle this
>>> internally?
>>>
>>> -Ron
>
>
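
Regarding the -XX:OnOutOfMemoryError idea above, a minimal sketch of how the NetworkServer
JVM could be launched so that an external script restarts the daemon (the script path and
options here are only placeholders, assuming a HotSpot JVM that supports this flag):

java -Xmx512m -XX:OnOutOfMemoryError="/path/to/restart-derby.sh" -jar derbyrun.jar server start

Note that -XX:OnOutOfMemoryError only runs the given command when the JVM first throws an
OutOfMemoryError; the restart logic itself still has to live in the external script.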
