db-derby-dev mailing list archives

From "Andrew Brown (JIRA)" <j...@apache.org>
Subject [jira] Commented: (DERBY-3009) Out of memory error when creating a very large table
Date Wed, 14 Nov 2007 22:11:43 GMT

    [ https://issues.apache.org/jira/browse/DERBY-3009?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel#action_12542608 ]

Andrew Brown commented on DERBY-3009:
-------------------------------------

I have run across this issue also, and I narrowed it down to index building.  I have a few
tables with 10-30 million records, and when building indexes on them I can watch the memory
usage grow until it crashes.  The only way around this for me has been to restart Derby after
each index is built (not really a good thing in a production environment).  This happens both
in ij and through a Java application.  We changed the Java code to commit and close the connection
after each index build, and that seemed to help, but the problem would still manifest itself.
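The commit-and-close workaround described above can be sketched roughly as below. This is my own illustration, not code from the application: the class and method names, the index naming scheme, and the table/column names are all placeholders.

```java
import java.sql.Connection;
import java.sql.DriverManager;
import java.sql.SQLException;
import java.sql.Statement;
import java.util.List;

public class IndexBuilder {

    // Build a CREATE INDEX statement for one column; the IX_ naming
    // scheme here is illustrative, not Derby's.
    public static String indexDdl(String table, String column) {
        return "CREATE INDEX IX_" + table + "_" + column
                + " ON " + table + " (" + column + ")";
    }

    // Open a fresh connection per index, commit, and close it again, so
    // whatever the previous build held can (hopefully) be reclaimed
    // before the next index is started.
    public static void buildIndexes(String jdbcUrl, List<String> ddlStatements)
            throws SQLException {
        for (String ddl : ddlStatements) {
            try (Connection conn = DriverManager.getConnection(jdbcUrl);
                 Statement stmt = conn.createStatement()) {
                conn.setAutoCommit(false);
                stmt.executeUpdate(ddl);
                conn.commit();
            } // connection closed here, before the next index is built
        }
    }
}
```

In the application this would be called once per table with the real embedded or client JDBC URL; the ij equivalent is to disconnect and reconnect between CREATE INDEX statements.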

I played around with some of the memory settings; setting derby.storage.pageSize to
a size larger than the default just caused the crash to happen faster.  I am not
a Java developer, but it seems that once an index is built, the buffer still has a lock on
the memory and it isn't being freed.
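For reference, the page size I was experimenting with is set in derby.properties; the value below is just an example of raising it above the 4 KB default:

```
# Illustrative only: raise the page size from the 4 KB default to 32 KB
derby.storage.pageSize=32768
```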

> Out of memory error when creating a very large table
> ----------------------------------------------------
>
>                 Key: DERBY-3009
>                 URL: https://issues.apache.org/jira/browse/DERBY-3009
>             Project: Derby
>          Issue Type: Bug
>    Affects Versions: 10.2.2.0
>         Environment: Win XP Pro
>            Reporter: Nick Williamson
>         Attachments: DERBY-3009.zip
>
>
> When creating an extremely large table (c.50 indexes, c.50 FK constraints), IJ crashes
> with an out of memory error. The table can be created successfully if it is done in stages,
> each one in a different IJ session.
> From Kristian Waagan:
> "With default settings on my machine, I also get the OOME.
> A brief investigation revealed a few things:
>   1) The OOME occurs during constraint additions (with ALTER TABLE ... 
> ADD CONSTRAINT). I could observe this by monitoring the heap usage.
>   2) The complete script can be run by increasing the heap size. I tried with 256 MB,
> but the monitoring showed usage peaked at around 150 MB.
>   3) The stack traces produced when the OOME occurs varies (as could be expected).
>   4) It is the Derby engine that "produce" the OOME, not ij (i.e. when I ran with the
> network server, the server failed).
> I have not had time to examine the heap content, but I do believe there is a bug in Derby.
> It seems some resource is not freed after use."

-- 
This message is automatically generated by JIRA.
-
You can reply to this email to add a comment to the issue online.

