db-derby-dev mailing list archives

From "Chris Wilson (JIRA)" <j...@apache.org>
Subject [jira] Commented: (DERBY-4620) Query optimizer causes OOM error on a complex query
Date Mon, 19 Apr 2010 15:06:54 GMT

    [ https://issues.apache.org/jira/browse/DERBY-4620?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=12858543#action_12858543 ]

Chris Wilson commented on DERBY-4620:
-------------------------------------

I don't think it's the wrong dump: I just copied it from my Subversion checkout directory,
bunzipped it, and opened it in Eclipse. The file I opened was:

$ md5sum /tmp/java_pid32345.hprof 
a493898efa0834726b163f126422bf02  /tmp/java_pid32345.hprof

I see the same counts for some objects as in your histogram, e.g.

Class Name                                         | Objects | Shallow Heap | Retained Heap
---------------------------------------------------------------------------------------------
org.apache.derby.iapi.types.SQLVarchar             | 233,173 |    9,326,920 | >= 13,057,688
org.apache.derby.iapi.types.DataValueDescriptor[]  | 100,376 |    7,641,528 | >= 48,864,896
char[][]                                           | 365,401 |    5,847,240 |  >= 5,850,664
---------------------------------------------------------------------------------------------


I don't see all the classes that you see, and I see some that you don't. Could it be due to
a filter in your Eclipse?

Re: "What was the max heap size set to when you got this OOME?" It was the default for Sun Java
1.6.0_17, which behaves the same as setting -Xmx64m, although I can't promise that it was
actually that value, as I don't normally specify it.
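
For what it's worth, a trivial check like the following (only a sketch; the class name is made
up) would show what the default actually resolves to when run with no -Xmx flag on the same JVM:

// Prints the effective maximum heap size of the running JVM.
// Compile and run without any -Xmx option, e.g. "java MaxHeapCheck".
public class MaxHeapCheck {
    public static void main(String[] args) {
        long maxBytes = Runtime.getRuntime().maxMemory();
        System.out.println("Effective max heap: " + (maxBytes / (1024 * 1024)) + " MB");
    }
}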

Re: "What was the Derby page cache size set to? (looks like maybe the default of 1000?)" I haven't
changed it from the default.

Re: "If this is a JVM bug, it will be hard, and maybe not predictable, to make a work-around
for it in Derby." I understand that it would be difficult to regression-test it without bundling
a separate copy of Java along with the test.

However, once we understand the bug, we can probably write a test case that uses a ridiculously
large amount of memory, e.g. joining a table to itself thousands of times in a way that forces
a hash join. This should cause a very quick failure if the JVM problem reoccurs, or if something
else triggers an increase in the memory used by such joins.
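
Something along these lines is what I have in mind. It is only a sketch (the table, the database
name, the row count and the number of joins are all made up and would need tuning), but it shows
the shape of such a test:

import java.sql.Connection;
import java.sql.DriverManager;
import java.sql.Statement;

// Sketch of a memory-stress test: join a small table to itself many times
// on an unindexed column so the optimizer is pushed towards hash joins.
// If the per-join memory use ever balloons again, this should fail fast
// with an OOME under a small heap instead of grinding away for ten minutes.
public class SelfJoinMemoryStress {
    public static void main(String[] args) throws Exception {
        Class.forName("org.apache.derby.jdbc.EmbeddedDriver");
        Connection conn = DriverManager.getConnection("jdbc:derby:stressdb;create=true");
        Statement s = conn.createStatement();
        s.executeUpdate("CREATE TABLE t (id INT PRIMARY KEY, val VARCHAR(100))");
        for (int i = 0; i < 1000; i++) {
            s.executeUpdate("INSERT INTO t VALUES (" + i + ", 'row " + i + "')");
        }

        // Build "SELECT COUNT(*) FROM t t0 JOIN t t1 ON t0.val = t1.val JOIN ...".
        // val has no index, so each equijoin is a hash-join candidate.
        int joins = 50; // made-up number; the real test would tune this up or down
        StringBuilder sql = new StringBuilder("SELECT COUNT(*) FROM t t0");
        for (int i = 1; i <= joins; i++) {
            sql.append(" JOIN t t").append(i)
               .append(" ON t").append(i - 1).append(".val = t").append(i).append(".val");
        }
        s.executeQuery(sql.toString()).close();

        s.close();
        conn.close();
    }
}

Run under something like -Xmx64m, that ought to blow up almost immediately if the memory used
per join ever regresses, rather than looping in GC for minutes first.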

> Query optimizer causes OOM error on a complex query
> ---------------------------------------------------
>
>                 Key: DERBY-4620
>                 URL: https://issues.apache.org/jira/browse/DERBY-4620
>             Project: Derby
>          Issue Type: Bug
>          Components: SQL
>    Affects Versions: 10.5.3.0
>         Environment: java version "1.6.0_17"
> Java(TM) SE Runtime Environment (build 1.6.0_17-b04)
> Java HotSpot(TM) Client VM (build 14.3-b01, mixed mode, sharing)
> Linux rocio.xxx 2.6.24-27-generic #1 SMP Fri Mar 12 01:10:31 UTC 2010 i686 GNU/Linux
>            Reporter: Chris Wilson
>         Attachments: query-plan.txt
>
>
> I have a query that generates about 2,000 rows in a summary table. It runs fast enough
> (~30 seconds) on MySQL. The same query on Derby runs for about 10 minutes and then fails with
> an OutOfMemoryError.
> I have created a test case to reproduce the problem. It's not minimal because it relies
> on a rather large dataset to reproduce it, and it's not trivial, but I don't mind doing a
> bit of work trimming it if someone can point me in the necessary direction.
> You can check out the test case from our Subversion server here:
>   http://rita.wfplogistics.org/svn/trackrita/rita/doc/derby-oom-slow-query
> which includes a pre-built Derby database in "testdb.derby". If this database is deleted,
> "test.sh" will recreate it, but that takes about 10-15 minutes.
> Just modify the script "test.sh" to point to your Derby libraries, and run it (or just
> execute the commands in "movement_complete.sql") to demonstrate the problem. You can view
> the source of that file online here:
>   http://rita.wfplogistics.org/trac/browser/rita/conf/movement_complete.sql
> The first "insert into movement_complete" (starting around line 4) takes about 15 seconds
> to complete and inserts 5890 rows. The second, starting around line 54, does not complete
> in reasonable time on Derby. On MySQL, it runs in 28 seconds and inserts 2038 rows. On Derby,
> after 10 minutes I get:
> JAVA ERROR: java.lang.OutOfMemoryError: Java heap space
> ij> ERROR X0Y67: Cannot issue rollback in a nested connection when there 
> is a pending operation in the parent connection.
> (process exits)
> It does not output the query plan in this case.
> Following the suggestion of Bryan Pendleton, I tried increasing the JVM memory limit
> from the default to 1024m, and this allows the query to finish executing quite quickly. I
> guess that means that the optimiser is just taking a lot of memory to
> optimise the query, and it spends forever in GC before finally hitting OOM and giving
> up when using the default settings.

-- 
This message is automatically generated by JIRA.
-
You can reply to this email to add a comment to the issue online.

