db-derby-dev mailing list archives

From "A B (JIRA)" <derby-...@db.apache.org>
Subject [jira] Updated: (DERBY-1007) Optimizer can return incorrect "best cost" estimates with nested subqueries, which leads to generation of sub-optimal plans.
Date Thu, 27 Apr 2006 06:26:15 GMT
     [ http://issues.apache.org/jira/browse/DERBY-1007?page=all ]

A B updated DERBY-1007:
-----------------------

    Attachment: d1007_followup_v1.patch

In short, the fix for this issue ensures that, in the case of subqueries, the optimizer will
correctly propagate the estimated costs for subqueries up to the parent subquery(-ies), thus
allowing the parent query to make a better decision about which join order is ultimately the
best.  As seen in the example scenario included above, the correct estimates are higher--sometimes
much higher--than what the optimizer was returning prior to this change: in the example, the
optimizer was returning an incorrect cost estimate of 10783 before the patch, and a correct
estimate of 1 million after the patch (where "correct" means that it's the value calculated
by the optimizer and thus the value that should be returned; I'm not saying anything about
the accuracy of the estimate here).

One side effect of this is that, for very deeply nested queries and/or queries with a high
number of FROM tables/expressions, the higher cost estimates can be multiplied--sometimes
many times over--throughout the optimization process, which means that the overall query estimate
can climb to a much larger number much more quickly.  If the query is big enough, this can
actually cause the optimizer to reach an estimated cost of INFINITY.
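As a rough standalone illustration (this is not Derby code, and the numbers are made up), cost estimates that compound multiplicatively can push a double past Double.MAX_VALUE to infinity in very few steps:

```java
// Toy sketch: repeatedly compounding large per-level cost estimates
// overflows a double to Double.POSITIVE_INFINITY.
public class CostBlowup {
    // Multiply a per-level cost once per nesting level; with large
    // enough estimates the running product overflows Double.MAX_VALUE
    // (roughly 1.8e308) and becomes infinity.
    static double accumulate(double perLevelCost, int levels) {
        double total = perLevelCost;
        for (int i = 1; i < levels; i++) {
            total *= perLevelCost;  // estimates compound multiplicatively
        }
        return total;
    }

    public static void main(String[] args) {
        // A 1e200 estimate compounded over just two levels overflows.
        System.out.println(accumulate(1.0e200, 2));                    // Infinity
        System.out.println(Double.isInfinite(accumulate(1.0e200, 2))); // true
    }
}
```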

However, the current optimizer logic for choosing a plan does not expect to see an estimate
of infinity for its plans.  As a result, the optimizer performs comparisons of, and arithmetic
with, cost estimates and row counts that, when applied to infinity, give unexpected results.
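The surprises here are standard IEEE-754 double behavior, which a quick standalone check (again, not Derby code) makes concrete:

```java
// Standard IEEE-754 behavior of doubles around infinity and NaN,
// which the optimizer's comparisons and arithmetic run into.
public class InfinitySurprises {
    public static void main(String[] args) {
        double inf = Double.POSITIVE_INFINITY;
        // Infinity is strictly greater than Double.MAX_VALUE, so a
        // "cost < Double.MAX_VALUE" acceptance check rejects it.
        System.out.println(inf > Double.MAX_VALUE);  // true
        // Arithmetic with infinity does not behave like finite math:
        System.out.println(inf - inf);               // NaN, not 0
        System.out.println(0.0 * inf);               // NaN, not 0
        // NaN fails every ordered comparison, even against itself.
        double nan = inf - inf;
        System.out.println(nan == nan);              // false
    }
}
```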

I have filed DERBY-1259 and DERBY-1260 to address the "infinity problem" in more detail, but
am attaching here a follow-up patch that takes some basic steps toward making the optimizer
more robust in the face of infinite cost estimates, which are now more likely to occur given
the DERBY-1007 changes.  In particular, the d1007_followup_v1.patch does the following:

1) Fixes a couple of small problems with the handling of estimates for FromBaseTables, to
ensure that a FromBaseTable's estimate is correctly propagated to (and handled by) the ProjectRestrictNode
that sits above it.  This parallels the original DERBY-1007 work but is a much simpler "follow-up"
task as it deals only with base tables instead of subqueries, and thus the changes are fairly
minor.

2) There are several places in OptimizerImpl where the optimizer will only choose to accept
a plan's cost if the cost is less than the current "bestCost".  If no best cost has been found
yet, bestCost holds a sentinel value of Double.MAX_VALUE, on the assumption that
the first valid plan will have a cost less than Double.MAX_VALUE and thus will be chosen as
the best so far.  However, since a plan's cost estimate can actually end up being Double.POSITIVE_INFINITY,
which is greater than Double.MAX_VALUE, it's possible that the optimizer will reject a valid
join order because its cost is infinity, and then end up completing without ever finding a
valid plan--which is wrong.  What we want is for the optimizer to accept the first valid plan
that it finds, regardless of what the cost is.  Then if it later finds a better plan, it can
use that.  So in several places the d1007_followup_v1.patch adds a check to see if bestCost
is uninitialized and, if so, we'll always accept the first valid join order we find, regardless
of what its cost is--even if it's infinity--because that's better than no plan at all.
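The shape of that check can be sketched as follows; the class, field, and method names here are illustrative only, not the actual OptimizerImpl API:

```java
// Minimal sketch of the "accept the first valid plan" logic.  Names
// are hypothetical, not taken from OptimizerImpl.
public class BestCostCheck {
    static final double UNINITIALIZED = Double.MAX_VALUE;  // sentinel
    double bestCost = UNINITIALIZED;

    boolean acceptPlan(double planCost) {
        // A bare "planCost < bestCost" check silently rejects an
        // infinite-cost plan, since Infinity > Double.MAX_VALUE; the
        // extra sentinel check accepts the first valid plan regardless.
        if (bestCost == UNINITIALIZED || planCost < bestCost) {
            bestCost = planCost;
            return true;
        }
        return false;
    }
}
```

Accepting even an infinite first plan matters because a later, cheaper plan can still replace it, whereas rejecting everything leaves the optimizer with no plan at all.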

3) Modifies the "compare" method in CostEstimateImpl.java to try to account for comparisons
between two plans that both have infinite costs.  If this happens, we don't have much choice
but to guess as to which plan is actually better.  So the changes for followup_v1 make that
guess based on a comparison of row counts for the two plans.  And if the row counts themselves
are infinity, then we'll guess based on the single scan row counts.  And finally, if those
values are both infinity as well, then we're out of luck and we just say that the two costs
are "equal" for lack of a better alternative.
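The fallback chain can be sketched like this; the field names and the exact arithmetic are illustrative, not the actual CostEstimateImpl code:

```java
// Sketch of the tie-breaking idea for comparing two infinite-cost
// plans: fall back to row counts, then single-scan row counts, then
// declare a tie.  Names are hypothetical, not CostEstimateImpl's.
public class CostCompare {
    double cost, rowCount, singleScanRowCount;

    CostCompare(double c, double r, double s) {
        cost = c; rowCount = r; singleScanRowCount = s;
    }

    // Negative result: "this" plan looks cheaper; positive: "other".
    double compare(CostCompare other) {
        if (Double.isInfinite(cost) && Double.isInfinite(other.cost)) {
            // Guard each fallback so we never compute Infinity - Infinity (NaN).
            if (!Double.isInfinite(rowCount) || !Double.isInfinite(other.rowCount)) {
                return rowCount - other.rowCount;
            }
            if (!Double.isInfinite(singleScanRowCount)
                    || !Double.isInfinite(other.singleScanRowCount)) {
                return singleScanRowCount - other.singleScanRowCount;
            }
            return 0.0;  // everything infinite: call the costs equal
        }
        return cost - other.cost;
    }
}
```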

4) And finally, due to unexpected behavior that results from arithmetic using infinity (see
DERBY-1259), it is currently possible (though rather rare) for the optimizer to decide to
do a hash join that has a cost estimate of Infinity.  An example of a query for which this
could happen can be found in DERBY-1205, query #1.  That said, the BackingStoreHashtable that
is used for carrying out a hash join currently creates a Java Hashtable instance with a capacity
that matches the optimizer's estimated row count.  So if the row count is infinity we'll try
to create a Hashtable with some impossibly large capacity and, as a result, we'll end up with
an OutOfMemoryError.  So the d1007_followup_v1.patch adds some code to handle this kind of
situation in a more graceful manner.
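One way to guard the capacity can be sketched as below; the method name and the fallback capacity are hypothetical, not what BackingStoreHashtable actually does:

```java
import java.util.Hashtable;

// Sketch of clamping an untrusted row-count estimate before using it
// to size a Hashtable.  Casting Double.POSITIVE_INFINITY to int yields
// Integer.MAX_VALUE, and allocating a table that large fails.
public class SafeHashCapacity {
    static final int DEFAULT_CAPACITY = 16;  // hypothetical fallback

    static int safeCapacity(double estimatedRowCount) {
        if (Double.isNaN(estimatedRowCount)
                || Double.isInfinite(estimatedRowCount)
                || estimatedRowCount > Integer.MAX_VALUE) {
            return DEFAULT_CAPACITY;  // fall back rather than blow the heap
        }
        return Math.max(DEFAULT_CAPACITY, (int) estimatedRowCount);
    }

    public static void main(String[] args) {
        Hashtable<Object, Object> ht =
            new Hashtable<>(safeCapacity(Double.POSITIVE_INFINITY));
        System.out.println(ht.isEmpty());  // true: created safely
    }
}
```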

I ran derbyall with these changes on Red Hat Linux using ibm142 and saw no new failures.

So if anyone has time to review/commit, I'd appreciate it.

Thanks.

> Optimizer can return incorrect "best cost" estimates with nested subqueries, which leads
to generation of sub-optimal plans.
> ----------------------------------------------------------------------------------------------------------------------------
>
>          Key: DERBY-1007
>          URL: http://issues.apache.org/jira/browse/DERBY-1007
>      Project: Derby
>         Type: Bug

>   Components: Performance
>     Versions: 10.2.0.0
>     Reporter: A B
>     Assignee: A B
>     Priority: Minor
>  Attachments: d1007_followup_v1.patch, d1007_v1.patch, d1007_v1.stat
>
> When optimizing a query that has nested subqueries in it, it's possible that the optimizer
for the subqueries will return cost estimates that are lower than what they were actually
calculated to be.  The result is that the outer query can pick an access plan that is sub-optimal.
> Filing this jira issue based on the thread "[OPTIMIZER] OptimizerImpl "best plans" for
subqueries?" from derby-dev.  Description that follows is pasted from that email:
> http://article.gmane.org/gmane.comp.apache.db.derby.devel/14836
> The following example of what I saw when tracing through the code demonstrates the problem.
> select x1.j, x2.b from
>   (select distinct i,j from t1) x1,
>   (select distinct a,b from t3) x2
> where x1.i = x2.a;
> During optimization of this query we will create three instances of OptimizerImpl:
>    OI_0: For "select x1.j, x2.b from x1, x2 where x1.i = x2.a"
>    OI_1: For "select distinct i,j from t1"
>    OI_2: For "select distinct a,b from t3"
> Query ran against a clean codeline when T1 had 1 row and T3 had 50,000.
>    -- Top-level call is made to the optimize() method of the
>      outermost SelectNode, which creates OI_0.
>    -- OI_0: picks join order {X1, X2} and calls X1.optimizeIt()
>    -- X1: *creates* OI_1 and makes calls to optimize it.
>    -- OI_1: picks join order {T1} and calls T1.optimizeIt()
>    -- T1: returns a cost of 20.
>    -- OI_1: saves 20 as new best cost and tells T1 to save it.
>    -- X1: calls OI_1.getOptimizedCost(), which returns 20.  X1
>      then returns 20 to OI_0.
>    -- OI_0: calls X2.optimizeIt()
>    -- X2: *creates* OI_2 and makes calls to optimize it.
>    -- OI_2: picks join order {T3} and calls T3.optimizeIt()
>    -- T3: returns a cost of 64700.
>    -- OI_2: saves 64700 as new best cost and tells T3 to save it.
>    -- X2: calls OI_2.getOptimizedCost(), which returns 64700. X2
>      then returns 64700 to OI_0.
>    -- OI_0: saves 20 + 64700 = 64720 as new best cost and tells
>      X1 to save 20 and X2 to save 64700.
>    -- OI_0: picks join order {X2, X1} and calls X2.optimizeIt()
>    -- X2: *fetches* OI_2 and makes calls to optimize it.
>    -- OI_2: picks join order {T3} and calls T3.optimizeIt()
>    -- T3: returns a cost of 10783.
>    -- OI_2: saves 10783 as new best cost and tells T3 to save it.
>    -- X2: calls OI_2.getOptimizedCost(), which returns 10783.  X2
>      then returns 10783 to OI_0.
>    -- OI_0: calls X1.optimizeIt()
>    -- X1: *fetches* OI_1 and makes calls to optimize it.
>    -- OI_1: picks join order {T1} and calls T1.optimizeIt()
>    -- T1: returns a cost of *1 MILLION!*.
>    -- OI_1: rejects new cost (1 mil > 20) and does nothing.
>    -- X1: calls OI_1.getOptimizedCost(), which returns *20*.  X1
>      then returns 20 to OI_0...this seems WRONG!
>    -- OI_0: saves 10783 + 20 = 10803 as new best cost and tells
>      X2 to save 10783 and X1 to save 20.
> So in the end, the outer-most OptimizerImpl chooses join order {X2, X1} because it thought
the cost of this join order was only 10783, which is better than  64720.  However, the _actual_
cost of the join order was really estimated at 1 million--so the outer OptimizerImpl chose
(and will generate) a plan that, according to the estimates, was (hugely) sub-optimal.
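The stale-cost behavior in the walkthrough above can be modeled with a toy sketch (not Derby code; the class and method names are invented): the inner optimizer caches its best cost and keeps returning it, even when the cost under the current outer join order is far higher.

```java
// Toy model of the bug: an inner optimizer that returns its cached
// best cost (20) instead of the cost actually calculated for the
// current join order (1,000,000).
public class StaleBestCost {
    double bestCost = Double.MAX_VALUE;

    // Remembers a cost only if it beats the cached best, then returns
    // the cached best -- which is the problematic part.
    double optimize(double currentCost) {
        if (currentCost < bestCost) {
            bestCost = currentCost;
        }
        return bestCost;  // stale when currentCost was rejected
    }

    public static void main(String[] args) {
        StaleBestCost oi1 = new StaleBestCost();       // plays the role of OI_1
        // Join order {X1, X2}: T1 costs 20, T3 costs 64700.
        double order1 = oi1.optimize(20.0) + 64700.0;       // 64720
        // Join order {X2, X1}: T3 costs 10783; T1 now costs 1,000,000,
        // but OI_1 still reports its cached 20.
        double order2 = 10783.0 + oi1.optimize(1000000.0);  // 10803, not 1010783
        System.out.println(order1);  // 64720.0
        System.out.println(order2);  // 10803.0 -- wrongly looks cheaper
    }
}
```

With the DERBY-1007 fix, the inner optimizer instead propagates the cost calculated for the current join order, so the outer comparison would be 64720 versus 1010783 and the first join order would win.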

-- 
This message is automatically generated by JIRA.
-
If you think it was sent incorrectly contact one of the administrators:
   http://issues.apache.org/jira/secure/Administrators.jspa
-
For more information on JIRA, see:
   http://www.atlassian.com/software/jira

