db-derby-dev mailing list archives

From Jack Klebanoff <klebanoff-de...@sbcglobal.net>
Subject [PATCH] BackingStoreHashtable
Date Fri, 25 Feb 2005 17:44:35 GMT
I have attached a patch that causes BackingStoreHashtable to spill to 
disk when it gets large. BackingStoreHashtable is used to implement hash 
joins, DISTINCT, scroll insensitive cursors, and the HAVING clause. The 
unpatched BackingStoreHashtable never spills to disk, which sometimes 
causes Derby to run out of memory. See Jira issue DERBY-106.

One of the arguments of the BackingStoreHashtable constructor, 
max_inmemory_rowcnt, is the maximum number of rows to store in memory. 
If this argument is non-negative, the patched BackingStoreHashtable 
spills to disk when more than that number of rows are added to the hash 
table.

If the max_inmemory_rowcnt argument is negative, then 
BackingStoreHashtable decides itself when to spill to disk. It does so 
when its estimate of the size of the hash table (in bytes) grows larger 
than 1% of the total memory measured when the BackingStoreHashtable was 
created. Currently Derby only constructs BackingStoreHashtables with 
max_inmemory_rowcnt = -1, so this mechanism is always the one that 
decides when to spill.
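
For concreteness, here is a minimal sketch of the spill decision just 
described. The names (SpillDecisionSketch, shouldSpill, maxInMemoryBytes) 
are illustrative, not Derby's actual source, and the use of 
Runtime.totalMemory() is my assumption about how "total memory" is 
measured:

    // A sketch of the spill decision; names are hypothetical.
    public class SpillDecisionSketch {
        private final long max_inmemory_rowcnt; // constructor argument
        private final long maxInMemoryBytes;    // 1% of total memory at creation

        public SpillDecisionSketch(long max_inmemory_rowcnt) {
            this.max_inmemory_rowcnt = max_inmemory_rowcnt;
            // Assumption: "total memory" means the JVM total, snapshotted
            // once when the BackingStoreHashtable is created.
            this.maxInMemoryBytes = Runtime.getRuntime().totalMemory() / 100;
        }

        /** Returns true when newly added rows should be written to disk. */
        boolean shouldSpill(long inMemoryRowCount, long estimatedSizeInBytes) {
            if (max_inmemory_rowcnt >= 0) {
                // Caller supplied an explicit cap on in-memory rows.
                return inMemoryRowCount > max_inmemory_rowcnt;
            }
            // Negative cap: spill when the estimated size of the hash
            // table exceeds the 1% threshold computed at construction.
            return estimatedSizeInBytes > maxInMemoryBytes;
        }
    }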

The disk portion of the hash table is handled by the class DiskHashtable. 
It does not implement a dynamic hash table data structure. Instead it 
follows an idea of Mike Matrigali: the rows are stored in a standard 
heap conglomerate, the same structure used by ordinary Derby tables, and 
a Btree indexes the rows by their hash codes. We cannot index them by 
their actual keys because our Btree implementation limits the length of 
a key. To find a record by key, DiskHashtable scans the Btree for all 
rows with the same hash code and matches the retrieved rows against the 
target key.
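
As a rough illustration, the lookup path has the shape below. The 
HashCodeIndex and HeapConglomerate interfaces are hypothetical stand-ins 
for Derby's conglomerate APIs; the point is only the algorithm: scan the 
Btree by hash code, then re-check the real key, since distinct keys can 
share a hash code:

    import java.util.List;

    // Hypothetical stand-ins for Derby's Btree and heap conglomerates.
    interface HashCodeIndex {
        // Row locations of all heap rows with the given hash code.
        List<Long> scanEqual(int hashCode);
    }

    interface HeapConglomerate {
        Object[] fetch(long rowLocation); // fetch a stored row by location
    }

    class DiskHashtableSketch {
        private final HashCodeIndex btree;
        private final HeapConglomerate heap;
        private final int keyColumn; // column holding the hash key

        DiskHashtableSketch(HashCodeIndex btree, HeapConglomerate heap,
                            int keyColumn) {
            this.btree = btree;
            this.heap = heap;
            this.keyColumn = keyColumn;
        }

        // Find a row by key: scan the Btree for entries with the same
        // hash code, then compare actual keys, because distinct keys
        // can collide on the same hash code.
        Object[] get(Object key) {
            for (long location : btree.scanEqual(key.hashCode())) {
                Object[] row = heap.fetch(location);
                if (key.equals(row[keyColumn])) {
                    return row;
                }
            }
            return null; // key not present in the disk portion
        }
    }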

The advantage of this method is that it requires little new code because 
it reuses the existing heap and Btree conglomerate implementations. The 
disadvantage is that it is slower than a true dynamic hash table 
structure. We expect that in most cases BackingStoreHashtable will not 
spill to disk, so this trade-off seems appropriate.

Issues and Future Work
------------------------

The Derby community may want to consider some follow on work.

Hash join costing should be re-evaluated now that BackingStoreHashtable 
can spill to disk. The optimizer can consider using hash joins on larger 
tables, but it should take the cost of disk access into account. This 
leads into a larger issue of optimizer costing. Our cost numbers were 
derived many years ago by timing various operations on a particular 
machine. That machine is probably no longer available for timing 
BackingStoreHashtable, and in any case it is now obsolete. We should 
revise all of our costing numbers on a more modern machine.

We may want to implement a real dynamic disk hash structure to improve 
the speed of BackingStoreHashtable when it has spilled to disk. If it 
were faster we could use hash joins more often, potentially improving 
the speed of some Derby joins. Furthermore, BackingStoreHashtable is used 
elsewhere, so our assumption that it will seldom spill to disk may not 
be correct.

In my implementation BackingStoreHashtable keeps old rows in memory 
after it decides to spill new rows to disk. Mike Matrigali suggested 
that it should also move the old rows to disk to reduce memory usage. 
That would come at the price of slowing down both the population of a 
BackingStoreHashtable and access to its elements, which is why I chose 
not to move old rows to disk.

Jack Klebanoff
