db-derby-dev mailing list archives

From "A B (JIRA)" <derby-...@db.apache.org>
Subject [jira] Commented: (DERBY-1902) Intermittent failures in predicatePushdown.sql
Date Thu, 05 Oct 2006 15:25:20 GMT
    [ http://issues.apache.org/jira/browse/DERBY-1902?page=comments#action_12440163 ] 
            
A B commented on DERBY-1902:
----------------------------

>  I turned the disk cache back on the Solaris x86 box, and the test still fails.

Thanks for trying that out and for reporting the results, Øystein.  I took a look at the
diffs and it basically comes down to 4 queries where we expect the optimizer to choose a hash
join but it chooses a nested loop join instead.  This is confusing to me since the derby.optimizer.noTimeout=true
property is set, which theoretically means that the optimizer should be choosing the same
plan every time.
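
In case anyone wants to poke at this outside the test harness: the property has to be in place before the Derby engine boots.  A minimal sketch, assuming the embedded driver (the database name "testdb" is just a placeholder):

    import java.sql.Connection;
    import java.sql.DriverManager;

    public class NoTimeoutRepro {
        public static void main(String[] args) throws Exception {
            // Must be set before the Derby engine boots, or it won't be picked up.
            System.setProperty("derby.optimizer.noTimeout", "true");
            Class.forName("org.apache.derby.jdbc.EmbeddedDriver");
            // "testdb" is a placeholder database name.
            Connection conn = DriverManager.getConnection("jdbc:derby:testdb;create=true");
            conn.close();
        }
    }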

Unfortunately, I don't have much more to offer on this right now.  I do know that during optimization
a hash join can be skipped/rejected if the optimizer thinks that the resultant in-memory hash
table would be too big.  The threshold for "too big" can be tuned with the maxMemoryPerTable
property, but since that isn't specified for the test, it shouldn't make a difference in this
case.
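
For completeness, if someone did want to experiment with that knob: I believe the full name is derby.language.maxMemoryPerTable and the value is in kilobytes (with 1024 as the default, if I remember correctly), e.g. in derby.properties:

    # Value is in KB; 1024 is (I believe) the default,
    # so this line just makes it explicit.
    derby.language.maxMemoryPerTable=1024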

I *did* notice that, when trying to determine if the hash table is too large, the computation
depends on the value returned from HashJoinStrategy.maxCapacity(), which makes a call to ClassSize.estimateHashEntrySize(),
which in turn uses a variable "refSize" whose value is based on the total and free memory available
to the Java runtime:

        // Figure out whether this is a 32 or 64 bit machine by measuring
        // how much heap an array of 10,000 object references occupies.
        Runtime runtime = Runtime.getRuntime();
        long memBase = runtime.totalMemory() - runtime.freeMemory();
        Object[] junk = new Object[10000];
        long memUsed = runtime.totalMemory() - runtime.freeMemory() - memBase;
        // Average bytes per reference, rounded to the nearest integer.
        int sz = (int)((memUsed + junk.length/2)/junk.length);
        // Never assume fewer than 4 bytes per reference.
        refSize = ( 4 > sz) ? 4 : sz;
        minObjectSize = 4*refSize;

It's quite a long shot, but maybe that has something to do with the different results on different
machines...?  Do the machines have the same amount of memory available to them when the test
is run?
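
If it would help to compare, here's a small standalone sketch that replays the same measurement and prints its inputs (the class name is mine, not Derby's; only the arithmetic mirrors the code above):

    public class RefSizeProbe {
        public static void main(String[] args) {
            Runtime runtime = Runtime.getRuntime();
            long memBase = runtime.totalMemory() - runtime.freeMemory();
            // Same trick as ClassSize: allocate 10,000 references and see
            // how much the used-heap figure grows.
            Object[] junk = new Object[10000];
            long memUsed = runtime.totalMemory() - runtime.freeMemory() - memBase;
            int sz = (int) ((memUsed + junk.length / 2) / junk.length);
            int refSize = (4 > sz) ? 4 : sz;
            System.out.println("totalMemory = " + runtime.totalMemory());
            System.out.println("freeMemory  = " + runtime.freeMemory());
            System.out.println("bytes/reference measured = " + sz);
            System.out.println("refSize that would be used = " + refSize);
        }
    }

Running that on both boxes, with the same JVM flags the harness uses, would show whether they end up with different refSize values.  (Note also that a GC kicking in between the two measurements would skew the result, which is another way this heuristic could wobble.)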

I admit this is a pretty far-fetched guess, but that's all I can think of at the moment...

> Intermittent failures in predicatePushdown.sql
> ----------------------------------------------
>
>                 Key: DERBY-1902
>                 URL: http://issues.apache.org/jira/browse/DERBY-1902
>             Project: Derby
>          Issue Type: Bug
>          Components: SQL, Test, Regression Test Failure
>    Affects Versions: 10.3.0.0
>         Environment: Seen on both Solaris 10 and Linux on 2-CPU Opteron boxes, disk cache off
>            Reporter: Øystein Grøvlen
>             Fix For: 10.3.0.0
>
>         Attachments: derbylang.zip
>
>
> For the last week, there have been intermittent failures in the night test in lang/predicatePushdown.sql.  There is a plan diff which starts as follows:
> ********* Diff file derbyall/derbylang/predicatePushdown.diff
> *** Start: predicatePushdown jdk1.5.0_07 derbyall:derbylang 2006-09-29 00:39:36 ***
> 4593 del
> < 			Hash Join ResultSet:
> 4593a4593
> > 			Nested Loop Join ResultSet:
> I did not find any changes that seem relevant before the first failing night test.
> This test has not failed in the tinderbox test which runs on a computer with the disk cache on.  For both computers where the failure is seen, the disk cache has been turned off.  Hence, it may be that another plan is picked because of slower I/O.

-- 
This message is automatically generated by JIRA.
-
If you think it was sent incorrectly contact one of the administrators: http://issues.apache.org/jira/secure/Administrators.jspa
-
For more information on JIRA, see: http://www.atlassian.com/software/jira

       
