db-derby-dev mailing list archives

From Mike Matrigali <mikem_...@sbcglobal.net>
Subject Re: Possible problem in org.apache.derby.impl.store.access.BackingStoreHashTableFromScan
Date Fri, 16 Mar 2007 18:39:01 GMT
I thought that going to disk was more of an edge/error case, and that the
optimizer would try not to choose plans where hash scan result sets
overflow to disk.  The query in this test seems pretty
straightforward - does anyone have an idea what went wrong here?  I
assume an estimate was wrong somewhere.

select * from table1, table2 where table1.tableID = table2.tableID

table1 and table2 have the same schema and data.

each has 25 columns, 500 rows, looking like

create table table1  (tableID int primary key, column2 varchar(50), 
column3 varchar(50), ..., column24 varchar(50))

I think the data is 1 through 500 for the primary key, and a full 50 
characters per varchar column, so probably something in the range of 
1300-byte rows.
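That 1300-byte figure is easy to sanity-check with back-of-envelope arithmetic. A minimal sketch, assuming 24 varchar(50) columns alongside the int key, single-byte character data, about 2 bytes of length overhead per varchar, and a dozen bytes of per-record overhead (the overhead figures are illustrative assumptions, not Derby's actual on-disk format):

```java
public class RowSizeEstimate {
    static int estimateRowBytes() {
        int varcharCols = 24;          // 25 columns total, minus the int key (assumption)
        int bytesPerVarchar = 50 + 2;  // 50 single-byte chars + ~2 bytes length overhead (assumption)
        int intKeyBytes = 4;           // 4-byte int primary key
        int perRowOverhead = 12;       // rough per-record overhead (assumption)
        return intKeyBytes + varcharCols * bytesPerVarchar + perRowOverhead;
    }

    public static void main(String[] args) {
        // Lands right around the "range of 1300 bytes" mentioned above.
        System.out.println(estimateRowBytes());
    }
}
```

At 500 rows per table that is well under a megabyte of join input, which is why an overflow to disk here looks like a bad estimate rather than a genuinely large hash table.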


Jeffrey Clary wrote:
> Folks,
> 
>  
> 
> I’m new to Derby and to these lists, so I’m not sure whether what I am 
> reporting is a bug or expected behavior.  You can see an earlier question 
> I asked on the derby-user list on 3/15/2007, titled “Heap container closed 
> exception (2 statements on same connection).”
> 
>  
> 
> I am not seeing the behavior I would expect after calling 
> Connection.setHoldability(ResultSet.HOLD_CURSORS_OVER_COMMIT).  I have 
> attached a test program that demonstrates the behavior.  Here is an 
> outline of what happens (with autocommit on):
> 
>  
> 
> 1. Execute a statement that returns a fairly large result set.
> 
> 2. Execute another statement on the same connection that logically 
> does not affect the first result set, but that does update the database.
> 
> 3. Iterate through the first result set.
> 
> 4. After some number of calls to next(), take an exception 
> indicating “heap container closed.”
> 
>  
> 
> I have looked a bit into the Derby source code, and I think that the 
> issue is related to the 
> org.apache.derby.impl.store.access.BackingStoreHashTableFromScan 
> constructor.  It passes a constant false value to its super in the 
> keepAfterCommit argument.  In fact, there is a comment there that says 
> “Do not keep the hash table after a commit.”  It seems to me that this 
> value should be based on the holdability attribute of the statement, as 
> set in the connection or when the statement is created.  But knowing so 
> little about the Derby implementation I don’t have any idea whether that 
> would trigger some unintended consequence.
> 
>  
> 
> Any advice would be appreciated.
> 
>  
> 
> Thanks,
> 
> Jeff Clary
> 
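For reference, the reproduction Jeff outlines above boils down to something like the following JDBC sketch. Table names, data, and the database URL are hypothetical (his attached test program is not reproduced in the archive), and it assumes the Derby embedded driver is on the classpath with the tables already populated:

```java
import java.sql.Connection;
import java.sql.DriverManager;
import java.sql.ResultSet;
import java.sql.Statement;

public class HoldabilityRepro {
    public static void main(String[] args) throws Exception {
        // Hypothetical database; assumes table1/table2 exist and are populated.
        Connection conn = DriverManager.getConnection("jdbc:derby:testdb");
        conn.setAutoCommit(true);
        conn.setHoldability(ResultSet.HOLD_CURSORS_OVER_COMMIT);

        // 1. Execute a statement that returns a fairly large result set.
        Statement s1 = conn.createStatement();
        ResultSet rs = s1.executeQuery(
            "select * from table1, table2 where table1.tableID = table2.tableID");

        // 2. An unrelated update on the same connection; with autocommit on,
        //    this commits - which, per the analysis above, closes the
        //    hash table backing the first result set.
        Statement s2 = conn.createStatement();
        s2.executeUpdate("update table1 set column2 = 'x' where tableID = 1");

        // 3./4. Iterating the held-over result set eventually fails with
        //       "heap container closed" after some number of next() calls.
        while (rs.next()) {
            rs.getInt("tableID");
        }
        rs.close();
        conn.close();
    }
}
```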

