db-derby-dev mailing list archives

From "Dag H. Wanvik (JIRA)" <j...@apache.org>
Subject [jira] [Commented] (DERBY-6111) OutOfMemoryError using CREATE INDEX on large tables
Date Thu, 21 Mar 2013 07:27:17 GMT

    [ https://issues.apache.org/jira/browse/DERBY-6111?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13608718#comment-13608718

Dag H. Wanvik commented on DERBY-6111:

Increasing the max heap to 24M [1], I see another error:

java.lang.OutOfMemoryError: GC overhead limit exceeded

Java(TM) SE Runtime Environment (build 1.7.0_17-b02)
Java HotSpot(TM) 64-Bit Server VM (build 23.7-b01, mixed mode)
> OutOfMemoryError using CREATE INDEX on large tables
> ---------------------------------------------------
>                 Key: DERBY-6111
>                 URL: https://issues.apache.org/jira/browse/DERBY-6111
>             Project: Derby
>          Issue Type: Bug
>          Components: SQL
>    Affects Versions:,
>         Environment: Windows 7, different JREs (1.4.2_09, 1.5.0_11, 7)
>            Reporter: Johannes Stadler
>            Priority: Critical
>              Labels: CREATE, INDEX, OutOfMemory, OutOfMemoryError
>         Attachments: createIndexOOM.zip, java_pid3236.zip
> I'm experiencing OutOfMemoryErrors when performing a simple CREATE INDEX command on tables
with more than 500,000 rows.
> The crashes occurred non-deterministically in our standard environment using 64 MB of heap
space. But you can easily reproduce the error with the attached repro database when
running it with 12 MB of heap space.
> Just start ij with the -Xmx12M JVM argument, connect to the sample db and execute the CREATE INDEX statement.
> I've done some investigation and I was able to track down the error. It occurs in SortBuffer.insert(),
but not, as expected, in NodeAllocator.newNode() (which has a handler for the OutOfMemoryError);
it is thrown earlier, in the call to sortObserver.insertDuplicateKey() or .insertNonDuplicateKey()
(where the data value descriptors are cloned).
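The failure path described above can be sketched roughly as follows. The class and method bodies here are illustrative stand-ins, not Derby's actual implementation; only the names SortBuffer.insert(), NodeAllocator.newNode(), and the sortObserver calls come from the report:

```java
// Illustrative sketch of the reported failure path: SortBuffer.insert()
// clones the row's data value descriptors via the sort observer *before*
// allocating a node, so an OutOfMemoryError thrown during cloning bypasses
// the OOM handler that only guards NodeAllocator.newNode().
class SortBufferSketch {
    interface SortObserver {
        Object[] insertNonDuplicateKey(Object[] row); // clones the row's DVDs
    }

    private final SortObserver observer;

    SortBufferSketch(SortObserver observer) { this.observer = observer; }

    boolean insert(Object[] row) {
        // 1. Cloning happens here; an OutOfMemoryError raised inside the
        //    observer propagates uncaught.
        Object[] cloned = observer.insertNonDuplicateKey(row);

        // 2. Only this allocation is guarded: on OOM, newNode() returns null
        //    and insert() reports "buffer full" instead of failing.
        Object node = newNode();
        if (node == null) {
            return false; // caller spills the buffer to disk (a merge run)
        }
        // ... link the cloned row into the node, add it to the sort tree ...
        return cloned != null;
    }

    private Object newNode() {
        try {
            return new Object[16]; // stands in for the real node allocation
        } catch (OutOfMemoryError oome) {
            return null; // the handler the reporter mentions
        }
    }
}
```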
> Unfortunately, that is not the right place to fix it. When I made the merge run (which spills
the buffer to disk) happen earlier, it did not significantly lower memory consumption.
Instead it created about 13,000 temp files of only 1 KB each (with that many files,
performance was unacceptable).
> So I analyzed the heap (using the HeapDumpOnOutOfMemoryError option) and saw that it's not
the sort buffer that consumes most of the memory (just a few KB, about 6% of the total),
but the ConcurrentCache. Even though the maxSize of the ConcurrentCache was set to 1000, the
cache contained about 2,500 elements. I've also attached the heap dump.
> If I'm understanding the concept correctly, cache elements are added without regard to
maxSize, and a low-priority worker thread shrinks the cache from time to
time to 10% of its size.
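The asynchronous pattern described here (entries admitted past maxSize, with a low-priority background thread shrinking the cache later) can be sketched as follows. The class is hypothetical and only models the behavior the reporter observed, not Derby's ConcurrentCache:

```java
import java.util.Iterator;
import java.util.Map;
import java.util.concurrent.ConcurrentHashMap;

// Hypothetical model of the observed behavior: put() never checks maxSize,
// so the cache can overshoot it; a low-priority daemon thread periodically
// shrinks the cache to 10% of its current size.
class LazyShrinkCache<K, V> {
    private final Map<K, V> entries = new ConcurrentHashMap<>();
    private final int maxSize;

    LazyShrinkCache(int maxSize) {
        this.maxSize = maxSize;
        Thread cleaner = new Thread(this::cleanLoop, "cache-cleaner");
        cleaner.setPriority(Thread.MIN_PRIORITY); // low priority, as described
        cleaner.setDaemon(true);
        cleaner.start();
    }

    void put(K key, V value) {
        entries.put(key, value); // no size check: cache may exceed maxSize
    }

    int size() { return entries.size(); }

    private void cleanLoop() {
        while (true) {
            try { Thread.sleep(10_000); } catch (InterruptedException e) { return; }
            if (entries.size() > maxSize) {
                shrink();
            }
        }
    }

    // Shrink the cache to 10% of its current size, as the reporter describes.
    synchronized void shrink() {
        int target = entries.size() / 10;
        Iterator<K> it = entries.keySet().iterator();
        while (entries.size() > target && it.hasNext()) {
            it.next();
            it.remove();
        }
    }
}
```

Under memory pressure, the gap between insertion and the next cleaner run is exactly when the cache can hold 2,500 entries against a maxSize of 1000.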
> I think that in this particular case, where memory is getting low, it would be better
to clear the cache synchronously and give the space to the sort buffer instead. Maybe that
could be done in ClockPolicy.insertEntry() when the current size exceeds
the max size by 50%. I'm not yet very familiar with the code, so I wasn't able to do it myself.
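The suggestion could look something like the sketch below. This is a hypothetical illustration of the idea (synchronous eviction once the cache exceeds maxSize by 50%), not a patch against ClockPolicy:

```java
import java.util.Iterator;
import java.util.LinkedHashMap;
import java.util.Map;

// Hypothetical sketch of the reporter's proposal: evict synchronously inside
// insertEntry() once the cache exceeds maxSize by 50%, instead of waiting
// for the low-priority background cleaner to run.
class SynchronousShrinkCache<K, V> {
    private final Map<K, V> entries = new LinkedHashMap<>(); // oldest entries first
    private final int maxSize;

    SynchronousShrinkCache(int maxSize) { this.maxSize = maxSize; }

    synchronized void insertEntry(K key, V value) {
        entries.put(key, value);
        // If the current size exceeds maxSize by 50%, shrink immediately so
        // the memory becomes available to other consumers (e.g. the sort buffer).
        if (entries.size() > maxSize * 3 / 2) {
            shrinkTo(maxSize);
        }
    }

    private void shrinkTo(int target) {
        Iterator<Map.Entry<K, V>> it = entries.entrySet().iterator();
        while (entries.size() > target && it.hasNext()) {
            it.next();
            it.remove();
        }
    }

    synchronized int size() { return entries.size(); }
}
```

With this policy the cache is bounded at 1.5 × maxSize at all times, at the cost of doing eviction work on the inserting thread.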
> I hope you have all the information you need; if you require anything further,
please let me know.
> Greetings
> Johannes Stadler

This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira
