db-derby-dev mailing list archives

From "RPost" <rp0...@pacbell.net>
Subject Re: [jira] Commented: (DERBY-106) HashJoinStrategy leads to java.lang.OutOfMemoryError
Date Tue, 21 Dec 2004 03:20:28 GMT
I'm inclined to think that there is still a problem lurking here also.

I'm still trying to work through which JoinStrategy is actually being used,
NestedLoopJoinStrategy or HashJoinStrategy; so far it appears to be the
HashJoinStrategy.
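In case it helps, one way to confirm which strategy the optimizer actually
picks is to turn on runtime statistics and read back the plan text. Here is a
minimal JDBC sketch, assuming the SYSCS_UTIL runtime statistics routines are
available and using placeholder database/table names (testDB, t1, t2):

import java.sql.Connection;
import java.sql.DriverManager;
import java.sql.ResultSet;
import java.sql.Statement;

public class ShowJoinStrategy {
    public static void main(String[] args) throws Exception {
        Class.forName("org.apache.derby.jdbc.EmbeddedDriver");
        Connection conn = DriverManager.getConnection("jdbc:derby:testDB");
        Statement s = conn.createStatement();

        // Collect runtime statistics for statements on this connection.
        s.execute("CALL SYSCS_UTIL.SYSCS_SET_RUNTIMESTATISTICS(1)");

        // Placeholder query; substitute the join that runs out of memory.
        ResultSet rs = s.executeQuery(
            "SELECT * FROM t1, t2 WHERE t1.id = t2.id");
        while (rs.next()) { /* drain the rows so the plan is complete */ }
        rs.close();

        // The returned text names the join strategy that was used
        // (hash join vs. nested loop join).
        rs = s.executeQuery(
            "VALUES SYSCS_UTIL.SYSCS_GET_RUNTIMESTATISTICS()");
        while (rs.next()) {
            System.out.println(rs.getString(1));
        }
        rs.close();
        s.close();
        conn.close();
    }
}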

Since the default maxMemoryPerTable setting is 1MB per table, does this mean
the default is being applied to decide NOT to use the
NestedLoopJoinStrategy? I'm trying to follow the logic of this code, and so
far it appears that the join strategies are evaluated in the order they
appear in the JoinStrategy array, and that the first strategy accepted is the
one used (a toy sketch of that selection order follows).
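Purely to illustrate how I'm reading that code, here is a toy sketch of
"first accepted strategy wins". The interface, class names, evaluation order,
and acceptance checks are hypothetical stand-ins, not Derby's actual
optimizer classes; the point is only that an underestimated memory figure
would let the hash join be accepted even when the real table is far over the
limit:

public class JoinStrategySelectionSketch {

    // Hypothetical stand-in; this is not Derby's actual JoinStrategy.
    interface JoinStrategy {
        String name();
        boolean accepted(long estimatedMemoryKB, long maxMemoryPerTableKB);
    }

    public static void main(String[] args) {
        final long maxMemoryPerTableKB = 1024; // the 1 MB default

        JoinStrategy hash = new JoinStrategy() {
            public String name() { return "HashJoinStrategy"; }
            // Accepted only if the *estimated* build table fits the cap.
            public boolean accepted(long est, long max) { return est <= max; }
        };
        JoinStrategy nestedLoop = new JoinStrategy() {
            public String name() { return "NestedLoopJoinStrategy"; }
            // Streams rows, so the per-table memory cap is not the limiter.
            public boolean accepted(long est, long max) { return true; }
        };

        // If the optimizer's size estimate is far too low, the hash join is
        // accepted first and wins, even though the real table is huge.
        long underestimatedKB = 512;   // what the optimizer thinks it needs
        // long actualKB = 500000;     // what the table really needs

        JoinStrategy[] strategies = { hash, nestedLoop }; // evaluation order
        for (int i = 0; i < strategies.length; i++) {
            if (strategies[i].accepted(underestimatedKB, maxMemoryPerTableKB)) {
                System.out.println("First accepted strategy: "
                        + strategies[i].name());
                break;
            }
        }
    }
}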

This would appear to indicate that the optimizer is rejecting the
NestedLoopJoinStrategy because the table is too big (i.e. larger than the
1MB default). But since the table is larger than 1MB, and removing the
HashJoinStrategy option allows the query to run, the NestedLoopJoin is
succeeding and the optimizer is guessing wrong.

We still need to work out whether the property setting is being used
properly by the optimizer. Intuitively, 1MB does not sound like much memory
to me, but I don't have any metrics to know how much memory some of these
queries typically use in Derby.

 /**
  Property name for controlling the maximum size of memory (in KB)
  the optimizer can use for each table.  If an access path takes
  memory larger than that size for a table, the access path is skipped.
  Default is 1024 (KB).
  */
String MAX_MEMORY_PER_TABLE = "derby.language.maxMemoryPerTable";
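For anyone who wants to try the workaround Gerald describes below, here is a
minimal sketch of setting the property in an embedded application. I'm
assuming it is picked up as a JVM system property set before the Derby
engine boots (it could also go in derby.properties); the database name
testDB is just a placeholder:

import java.sql.Connection;
import java.sql.DriverManager;

public class DisableHashJoins {
    public static void main(String[] args) throws Exception {
        // Must be set before the Derby engine boots to take effect.
        // 0 KB makes every hash join access path exceed the per-table
        // limit, which effectively turns the hash join strategy off.
        System.setProperty("derby.language.maxMemoryPerTable", "0");

        Class.forName("org.apache.derby.jdbc.EmbeddedDriver");
        Connection conn =
            DriverManager.getConnection("jdbc:derby:testDB;create=true");
        // ... run the problematic join here ...
        conn.close();
    }
}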

----- Original Message ----- 
From: "Suresh Thalamati" <tsuresh@Source-Zone.org>
To: "Derby Development" <derby-dev@db.apache.org>
Sent: Monday, December 20, 2004 6:20 PM
Subject: Re: [jira] Commented: (DERBY-106) HashJoinStrategy leads to
java.lang.OutOfMemoryError


> Gerald Khin (JIRA) wrote:
>
> > [ http://nagoya.apache.org/jira/browse/DERBY-106?page=comments#action_56877 ]
> >
> >Gerald Khin commented on DERBY-106:
> >-----------------------------------
> >
> >The system property derby.language.maxMemoryPerTable is the system
> >property I asked for. Setting it to 0 works like a charm and turns the hash
> >join strategy off. So I'm happy and the bug can be closed. Perhaps this
> >system property should be mentioned somewhere in the derby tuning manual.
> >
>
> I don't think this bug should be closed.  The out of memory error is most
> probably coming because the whole hash table is stored in memory; the
> current implementation of Derby's hash table does not have logic to spill
> hash table entries to disk when a lot of memory is required.  Although
> using the maxMemoryPerTable flag is a good workaround, it would be good
> to fix the optimizer to NOT choose a hash-table join when memory
> requirements cannot be estimated accurately.
>
> -suresh.

