db-derby-dev mailing list archives

From "A B (JIRA)" <derby-...@db.apache.org>
Subject [jira] Updated: (DERBY-1315) Statement optimization/compilation fails with OutOfMemoryException in largeCodeGen test with embedded and framework DerbyNetClient
Date Thu, 31 Aug 2006 18:51:28 GMT
     [ http://issues.apache.org/jira/browse/DERBY-1315?page=all ]

A B updated DERBY-1315:
-----------------------

    Attachment: d1315_v1.patch

Attaching a patch, d1315_v1.patch, to address this issue.  This patch adds a small amount
of logic to remove entries from an Optimizable's "best plan" HashMap when they are no longer
needed.  For more on when this is possible, see the discussion here:

  http://article.gmane.org/gmane.comp.apache.db.derby.devel/26051
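
Just to illustrate the idea (this is NOT the actual patch code; "PlanCache", "Plan", and the
method names below are hypothetical stand-ins for the bookkeeping Derby's Optimizables do
internally), the change boils down to dropping a saved plan from the map once the optimizer
knows it will never need to revert to it:

    import java.util.HashMap;
    import java.util.Map;

    // Illustrative sketch only -- NOT the actual patch code.
    class PlanCache {

        static class Plan { /* access path details omitted */ }

        // One entry per caller (e.g. an outer join-order pass) that may
        // later need to revert this Optimizable to a previously chosen
        // access path.
        private final Map<Object, Plan> bestPlans =
            new HashMap<Object, Plan>();

        // Remember the current best plan so it can be restored if the
        // join orders tried afterward turn out to be worse.
        void saveBestPlan(Object caller, Plan plan) {
            bestPlans.put(caller, plan);
        }

        Plan loadBestPlan(Object caller) {
            return bestPlans.get(caller);
        }

        // The essence of d1315_v1.patch as described above: once the
        // caller is finished and its saved plan can never be needed
        // again, remove the entry instead of letting it accumulate for
        // the rest of compilation.  For deeply nested statements this
        // keeps thousands of stale plans from piling up on the heap.
        void forgetBestPlan(Object caller) {
            bestPlans.remove(caller);
        }
    }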

I built a set of insane jars with this patch applied and then ran some of the tests related
to this issue:

  1. largeCodeGen.sql (with Xmx512M):

    This test now passes with the patch applied.  Or rather, the
    OutOfMemory error no longer occurs; instead, the query results
    in a Java linkage error with the message:

      'constant_pool(70288 > 65535): java.io.IOException'

    This is the same behavior that occurs with 10.1, and thus
    is no longer a regression.

  2. largeCodeGen.java (as attached to this issue, with Xmx512M):

    When run as-is with Sun jdk15, the logical operator tests fail
    with error 42ZA0 for 900 params; the IN tests fail with error
    42ZA0 for 99000 params; and the union tests fail with an OOM
    error for the 4700 union test case (a rough sketch of the
    union test pattern follows this item).  HOWEVER, this OOM
    appears to be caused by something other than the optimizer
    changes for DERBY-805.  There are several reasons why I think
    (hope) this is the case:

    a. Sun jdk15 throws an error "OOMError: PermGen space" (or
       something like that); when I searched Google for that error,
       all of the pages I (very quickly) browsed indicated that this
       is frequently the result of missed opportunities for garbage
       collection.

    b. If I change largeCodeGen to start counting from 4500 unions
       (instead of 800), then the case of 4700, 4800, 4900, and
       5000 unions all pass.  I killed the test after that so I
       don't know how far it would have gone.

    c. If I run with ibm142 instead of Sun jdk15, the test succeeds
       for as many as 6200 unions without problem (though memory
       usage does creep up for every iteration, suggesting that it
       would have run out of memory at some point).  I killed the
       test after that so I don't know how far it would have gone.

    d. If I ONLY run the largeCodeGen test with 10000 unions (no
       other tests before it), then the query will fail with error
       42ZA0 instead of with an OOM.  I.e. this is the expected
       behavior (...isn't it?)

    One other interesting thing to note with this test: if I change
    it so that it ONLY runs with 7500 unions (no other tests before
    it), the query compiles successfully for both ibm142 and jdk15.
    Then, for ibm142, it fails AT EXECUTION TIME with an OOM error,
    while for Sun jdk15 it executes without error.  Again, this
    suggests a difference between JVMs more than a problem with Derby.
    In either case, the fact that compilation succeeds shows that the
    regression reported for DERBY-1315 is fixed with d1315_v1.patch
    (at least for insane builds).  The OOM error for ibm142 is an
    error that we never would have reached with 10.1 because 10.1
    fails during compilation (or more specifically, during code
    generation) and thus never makes it to execution.  So this is
    a case where functionality that didn't work in 10.1 still doesn't
    work in 10.2--at least not with ibm142--and thus this wouldn't be
    a regression.
 
    This fact is further backed by the observation that a test of
    10000 unions will fail with error 42ZA0 on ibm142 during
    compilation--which means we never get to execution and thus
    no OOM error occurs in that case.
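
    As a point of reference for the union cases above: the pattern
    they exercise boils down to preparing a single statement with a
    very large number of UNION branches.  Here is a minimal JDBC
    sketch of that pattern (the table name, column name, driver URL,
    and exact SQL are made up for this sketch and may differ from
    what largeCodeGen actually generates):

      import java.sql.Connection;
      import java.sql.DriverManager;
      import java.sql.PreparedStatement;
      import java.sql.SQLException;
      import java.sql.Statement;

      public class ManyUnionsSketch {

          // Build "SELECT c1 FROM t UNION ALL SELECT c1 FROM t ..." with
          // the requested number of branches and ask Derby to compile it.
          // With enough branches the compile fails with 42ZA0/XBCM4
          // (generated class exceeds JVM class file limits) or, before
          // the fix described above, with an OutOfMemoryError during
          // optimization.
          static void prepareUnions(Connection conn, int unionCount)
                  throws SQLException {
              StringBuffer sql = new StringBuffer("SELECT c1 FROM t");
              for (int i = 0; i < unionCount; i++) {
                  sql.append(" UNION ALL SELECT c1 FROM t");
              }
              // Optimization and code generation both happen here.
              PreparedStatement ps = conn.prepareStatement(sql.toString());
              ps.close();
          }

          public static void main(String[] args) throws Exception {
              Class.forName("org.apache.derby.jdbc.EmbeddedDriver");
              Connection conn = DriverManager.getConnection(
                      "jdbc:derby:testdb;create=true");
              Statement s = conn.createStatement();
              s.executeUpdate("CREATE TABLE t (c1 INT)");
              s.close();
              // Mirrors observation (b) above: start counting at 4500.
              for (int n = 4500; n <= 5000; n += 100) {
                  prepareUnions(conn, n);
                  System.out.println("compiled query with " + n + " unions");
              }
              conn.close();
          }
      }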

  3. lang/largeCodeGen.java as a standalone JUnit test:

    I uncommented the lines in lang/largeCodeGen that test 10000
    unions and then ran the test as a standalone JUnit with the
    following command (against an INSANE build):

    java -Xmx512M junit.textui.TestRunner
      org.apache.derbyTesting.functionTests.test.lang.largeCodeGen

    The result was:

    ....
    Time: 593.543

    OK (4 tests) 

    In other words, it took a good chunk of time (10 minutes), but the
    test *did* pass with a max heap size of 512M.  Note that, of the
    10 minutes spent running the test, it looks like less than 20% of
    that time is spent in optimization; the rest of the time is
    (apparently?) used for generating code.

    NOTE: If I try to run the same test with a SANE build, it fails
    with an OOM error.  So this perhaps requires more investigation.
    But d1315_v1.patch at least makes things better.

I ran derbyall against sane jars with this patch on Red Hat Linux using ibm142 and there were
no failures.  So I think the patch itself can be committed (pending review).  There is still
more work, though, to figure out how to run largeCodeGen as part of derbyall: the test requires
up to 512M of memory and maxes out my CPU for 10 minutes before completing--is it reasonable
to add such a test to derbyall?  Also, more investigation is required in order to figure out
why the test passes with insane jars but fails (due to OOM) with sane jars.  At the very least,
some kind of logic would be necessary to only run the test for insane builds, and I have no
idea how (or if) that logic would work in JUnit.
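
For what it's worth, here is one purely hypothetical sketch of what that gating could look
like as a JUnit (3.x) suite() method.  Whether the engine's SanityManager is the right (or
even visible) way for a test to detect a sane build is exactly the open question, so the
import below is an assumption, and the test class name is taken from the command line in
item 3 above:

    import junit.framework.Test;
    import junit.framework.TestSuite;

    // ASSUMPTION: this is where the sanity flag lives for the engine in
    // this codeline; it may not be the appropriate class (or even be
    // visible) from the test harness.
    import org.apache.derby.iapi.services.sanity.SanityManager;

    public class LargeCodeGenSuite {

        public static Test suite() {
            TestSuite suite = new TestSuite("largeCodeGen");
            if (SanityManager.DEBUG) {
                // Sane (debug) build: skip the memory-hungry cases
                // entirely by returning an empty suite.
                return suite;
            }
            // Insane build: add the real test class.
            suite.addTestSuite(
                org.apache.derbyTesting.functionTests.test.lang.largeCodeGen.class);
            return suite;
        }
    }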

Comments/feedback would be appreciated....

> Statement optimization/compilation fails with OutOfMemoryException in largeCodeGen test with embedded and framework DerbyNetClient
> -----------------------------------------------------------------------------------------------------------------------------------
>
>                 Key: DERBY-1315
>                 URL: http://issues.apache.org/jira/browse/DERBY-1315
>             Project: Derby
>          Issue Type: Bug
>          Components: SQL, Regression Test Failure
>    Affects Versions: 10.2.1.0
>         Environment: Linux - Suse 2.6.5-7.252
>            Reporter: Ramandeep Kaur
>             Fix For: 10.2.1.0
>
>         Attachments: 1315-comparison.html, d1315_v1.patch, largeCodeGen.java, largeCodeGen.out, largeCodeGen.sql, largeCodeGen.tmp, largeCodeGen.tmpmstr
>
>
> The test case lang/largeCodeGen.java was run as is without giving any -Djvmflags="-mx512M -ms512M" and gave the following error:
> *** Start: largeCodeGen jdk1.4.2 largeDataTests:largeDataTests 2006-04-29 08:30:04 ***
> 27a28
> > JVMST109: Insufficient space in Javaheap to satisfy allocation request
> Test Failed.
> *** End:   largeCodeGen jdk1.4.2 largeDataTests:largeDataTests 2006-04-29 08:32:01 ***
>  
> Then the test case lang/largeCodeGen.java was run with -Djvmflags="-mx512M -ms512M", and it gave the following error:
> < PASS: IN clause with 97000 parameters
> 20 del
> < PASS: PREPARE: IN clause with 98000 parameters
> 21 del
> < PASS: IN clause with 98000 parameters
> 22 del
> < FAILED QUERY: IN clause with 99000 parameters. 
> 22a19
> > FAILED QUERY: IN clause with 97000 parameters.
> Test Failed.
> Then I modified test case lang/largeCodeGen.java to set PRINT_FAILURE_EXCEPTION to true and ran the test again. This time I got the following error and stack trace:
> MasterFileName = master/largeCodeGen.out
> 15a16,18
> > java.sql.SQLException: Statement too complex. Try rewriting the query to remove complexity. Eliminating many duplicate expressions or breaking up the query and storing interim results in a temporary table can often help resolve this error. SQLSTATE: XBCM4: Java class file format limit(s) exceeded: method:e1 code_length (65577 > 65535) in generated class org.apache.derby.exe.ac46a08075x010bx203axd010x000050a9065e9.
> > Caused by: org.apache.derby.client.am.SqlException: Statement too complex. Try rewriting the query to remove complexity. Eliminating many duplicate expressions or breaking up the query and storing interim results in a temporary table can often help resolve this error. SQLSTATE: XBCM4: Java class file format limit(s) exceeded: method:e1 code_length (65577 > 65535) in generated class org.apache.derby.exe.ac46a08075x010bx203axd010x000050a9065e9.
> >       ... 4 more
> 19 del
> < PASS: IN clause with 97000 parameters
> 20 del
> < PASS: PREPARE: IN clause with 98000 parameters
> 21 del
> < PASS: IN clause with 98000 parameters
> 22 del
> < FAILED QUERY: IN clause with 99000 parameters. 
> 22a22,29
> > FAILED QUERY: IN clause with 97000 parameters.
> > java.sql.SQLException: The parameter position '31,465' is out of range.  The number of parameters for this prepared statement is '31,464'.
> >       at org.apache.derby.client.am.PreparedStatement.setInt(PreparedStatement.java(Compiled Code))
> > Caused by: org.apache.derby.client.am.SqlException: The parameter position '31,465' is out of range.  The number of parameters for this prepared statement is '31,464'.
> >       at org.apache.derby.client.am.PreparedStatement.checkForValidParameterIndex(PreparedStatement.java(Compiled Code))
> >       at org.apache.derby.client.am.PreparedStatement.checkSetterPreconditions(PreparedStatement.java(Inlined Compiled Code))
> >       at org.apache.derby.client.am.PreparedStatement.setIntX(PreparedStatement.java(Inlined Compiled Code))
> >       ... 5 more
> 27a35,37
> > java.sql.SQLException: Statement too complex. Try rewriting the query to remove complexity. Eliminating many duplicate expressions or breaking up the query and storing interim results in a temporary table can often help resolve this error. SQLSTATE: XBCM4: Java class file format limit(s) exceeded: method:fillResultSet code_length (69127 > 65535) in generated class org.apache.derby.exe.ac46a08075x010bx203axd010x000050a9065e11.
> > Caused by: org.apache.derby.client.am.SqlException: Statement too complex. Try rewriting the query to remove complexity. Eliminating many duplicate expressions or breaking up the query and storing interim results in a temporary table can often help resolve this error. SQLSTATE: XBCM4: Java class file format limit(s) exceeded: method:fillResultSet code_length (69127 > 65535) in generated class org.apache.derby.exe.ac46a08075x010bx203axd010x000050a9065e11.
> >       ... 3 more
> 28 add
> > java.sql.SQLException: Java exception: ': java.lang.OutOfMemoryError'.
> > Caused by: org.apache.derby.client.am.SqlException: Java exception: ': java.lang.OutOfMemoryError'.
> >       ... 3 more
> Test Failed.

-- 
This message is automatically generated by JIRA.
-
If you think it was sent incorrectly contact one of the administrators: http://issues.apache.org/jira/secure/Administrators.jspa
-
For more information on JIRA, see: http://www.atlassian.com/software/jira

        
