db-derby-dev mailing list archives

From Mike Matrigali <mikem_...@sbcglobal.net>
Subject Re: How Much Memory for hash joins
Date Thu, 28 Feb 2013 17:11:00 GMT
There are some good comments in
java/engine/org/apache/derby/iapi/store/access/BackingStoreHashTable.java,
the class that BackingStoreHashTableFromScan inherits from.
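
To paraphrase the idea in those comments with a toy sketch (the names
below are mine, not Derby's, and the real class sizes rows and spills
into a DiskHashtable backed by store): rows go into an in-memory
java.util.Hashtable until a threshold is crossed, after which further
inserts overflow to disk.

import java.util.Hashtable;

// Toy illustration of the spill-over idea behind BackingStoreHashTable.
// Rows stay in memory up to a row-count threshold; once it is crossed,
// further inserts go to a (here: simulated) disk-backed table.
public class SpillingHashTableSketch {
    private final Hashtable<Object, Object> inMemory = new Hashtable<>();
    private Hashtable<Object, Object> spilled;  // stands in for DiskHashtable
    private final long maxInMemoryRowCount;

    public SpillingHashTableSketch(long maxInMemoryRowCount) {
        this.maxInMemoryRowCount = maxInMemoryRowCount;
    }

    public void put(Object key, Object row) {
        if (spilled == null && inMemory.size() < maxInMemoryRowCount) {
            inMemory.put(key, row);
        } else {
            if (spilled == null) {
                spilled = new Hashtable<>();  // real code creates a DiskHashtable
            }
            spilled.put(key, row);
        }
    }

    public Object get(Object key) {
        Object row = inMemory.get(key);
        return (row != null || spilled == null) ? row : spilled.get(key);
    }
}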

I am not sure what parameters are usually passed down to this class
from the optimizer/execution layer.

I have always assumed that the intent of the option is per opened
"table" in store.  That is not user friendly at all, since the user
does not really know how this maps onto their query, which is likely
why the option was never made public for a zero-admin database.
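
(For anyone who wants to experiment with it anyway, something like the
sketch below should work.  This is untested; the property name is the
one Kathey quotes from OptimizerFactoryImpl, it has to be set before
the embedded engine boots, and I believe the value is interpreted in
KB since the optimizer multiplies it by 1024 -- worth verifying against
the source.  The in-memory database name is just for the demo.)

import java.sql.Connection;
import java.sql.DriverManager;

public class MaxMemoryPerTableExperiment {
    public static void main(String[] args) throws Exception {
        // Undocumented property; value believed to be in KB, so this
        // asks for a 512 KB per-table budget instead of the 1 MB default.
        System.setProperty("derby.language.maxMemoryPerTable", "512");

        // Boot the embedded engine only after the property is set.
        Connection conn = DriverManager.getConnection(
                "jdbc:derby:memory:demo;create=true");
        // ... run the join-heavy workload here and compare plans ...
        conn.close();
    }
}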

Internally, one of these backing store hash tables can be created for
any index or table access that is part of a query.  A single query
with joins could have a number of these, depending on how many terms
are in the joins.
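
As a back-of-the-envelope upper bound (my reading only -- it assumes
each hash-joined table gets its own budget and ignores the optimizer's
real cost model):

public class HashJoinMemoryEstimate {
    public static void main(String[] args) {
        // Illustrative arithmetic only: assumes one backing store hash
        // table per hash-joined table/index in the plan, each with the
        // default 1 MB budget from OptimizerFactoryImpl.
        long maxMemoryPerTable = 1048576;  // bytes, the quoted default
        int hashJoinedTables = 4;          // e.g. a five-way join
        long worstCase = maxMemoryPerTable * hashJoinedTables;
        System.out.println("worst case ~ " + (worstCase / 1024) + " KB in memory");
    }
}

Note too that, as the snippet Kathey quotes below shows, once a
positive row-count cap is computed the byte cap becomes Long.MAX_VALUE,
so rows that are much wider than the optimizer's estimate (Blobs and
Clobs, as in her dump) can blow well past the nominal budget.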


On 2/27/2013 12:47 PM, Katherine Marsden wrote:
> I was wondering what is the default maximum memory for hash joins.
>
> Looking at OptimizerFactoryImpl I see
> protected int maxMemoryPerTable = 1048576 unless overridden by
> "derby.language.maxMemoryPerTable";
> Is it actually intended per table or per active query?  I don't see
> the property in the documentation.
> If I set this to zero, will it turn off hash joins altogether?
>
>
> In BackingStoreHashTableFromScan I see:
>
>     this.max_inmemory_rowcnt = max_inmemory_rowcnt;
>     if (max_inmemory_rowcnt > 0)
>         max_inmemory_size = Long.MAX_VALUE;
>     else
>         max_inmemory_size = Runtime.getRuntime().totalMemory()/100;
>
> So what is the intent and actual behavior of
> "derby.language.maxMemoryPerTable" and its default, and do they match?
> Are there other factors that go into setting the ceiling for memory
> usage for hash joins?
>
> Thanks
>
> Kathey
>
> P.S.
> In actual practice, on a *very* old Derby version (Apache Derby -
> 10.1.2.1 - (330608)), I am looking at an hprof dump which shows almost
> 2 GB of Blob and Clob objects that trace back to hash joins, and
> BackingStoreHashTableFromScan objects that have values as below, with
> max_inmemory_size at Long.MAX_VALUE as I would expect from the above code.
>
> e.g.
> instance of
> org.apache.derby.impl.store.access.BackingStoreHashTableFromScan@0xa59f7d08
> (63 bytes)
> Class:
> class org.apache.derby.impl.store.access.BackingStoreHashTableFromScan
> Instance data members:
> auxillary_runtimestats (L) : <null>
> diskHashtable (L) : <null>
> hash_table (L) : java.util.Hashtable@0xa59f7d48 (40 bytes)
> inmemory_rowcnt (J) : 8686
> keepAfterCommit (Z) : false
> key_column_numbers (L) : [I@0xa33b6110 (12 bytes)
> max_inmemory_rowcnt (J) : 58254
> max_inmemory_size (J) : 9223372036854775807
> open_scan (L) :
> org.apache.derby.impl.store.access.heap.HeapScan@0xa59f7f10 (64 bytes)
> remove_duplicates (Z) : false
> row_source (L) : <null>
> skipNullKeyColumns (Z) : true
> tc (L) : org.apache.derby.impl.store.access.RAMTransaction@0x8f960428
> (57 bytes)
> References to this object:
> org.apache.derby.impl.sql.execute.HashScanResultSet@0xa33b5fc8 (321
> bytes) : field hashtable
>

