db-derby-dev mailing list archives

From "Jeffrey Clary" <jcl...@actuate.com>
Subject RE: Possible problem in org.apache.derby.impl.store.access.BackingStoreHashTableFromScan
Date Fri, 16 Mar 2007 18:37:57 GMT
Thanks, Mike.  I'll look up how to add something to JIRA and put it in
there.

We've got a change working locally in which the creator of the
BackingStoreHashTableFromScan gets the holdability from the Activation
object.  I'm wary of it, though, because I don't yet know enough to
consider issues like the Derby temporary files you mention below.

As far as a workaround goes, I don't think I'm interested in one that
depends on data size; I can't predict how big result sets might be in
my application.  I am looking into whether we can get by without having
two statements active against the same connection, which would take
holdability out of the picture entirely.
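
For the record, the shape of that two-connection approach might look like the
sketch below (the JDBC URL and table names are placeholders, not from the
original test program -- the point is only the connection structure):

```java
import java.sql.*;

// Sketch: keep the long-lived SELECT and the UPDATE on separate
// connections, so the UPDATE's autocommit on the second connection
// cannot close resources backing the first connection's result set.
// The URL and table names below are hypothetical placeholders.
public class TwoConnectionSketch {
    static final String URL = "jdbc:derby:sampleDb"; // placeholder

    static void iterateWhileUpdating() throws SQLException {
        try (Connection readConn = DriverManager.getConnection(URL);
             Connection writeConn = DriverManager.getConnection(URL)) {
            try (Statement read = readConn.createStatement();
                 ResultSet rs = read.executeQuery("SELECT id FROM t ORDER BY id");
                 Statement write = writeConn.createStatement()) {
                // The update commits on writeConn only; rs stays open.
                write.executeUpdate("UPDATE t SET touched = 1 WHERE id = 0");
                while (rs.next()) {
                    System.out.println(rs.getInt(1));
                }
            }
        }
    }

    public static void main(String[] args) {
        // Without a Derby driver on the classpath this just reports the
        // failure; the two-connection structure above is the point.
        try {
            iterateWhileUpdating();
        } catch (SQLException e) {
            System.out.println("no database available: "
                    + e.getClass().getSimpleName());
        }
    }
}
```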


-----Original Message-----
From: Mike Matrigali [mailto:mikem_app@sbcglobal.net] 
Sent: Friday, March 16, 2007 12:45 PM
To: derby-dev@db.apache.org
Subject: Re: Possible problem in
org.apache.derby.impl.store.access.BackingStoreHashTableFromScan

This definitely looks like a bug, and I think you have the right
analysis.  You should report it in JIRA with your findings and test
case.

If you are interested in working on it, some things to consider:
1) Would need to get the holdability info all the way down from execution
    into the call.  I think the interesting place is
    java/engine/org/apache/derby/impl/sql/execute/HashScanResultSet.java.
    I didn't see holdability right off in this file; maybe someone can
    add the right way to get this info in this class?
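
A sketch of the kind of change being discussed, purely illustrative -- the
accessor name on Activation and the surrounding argument list are assumptions,
not verified against the Derby source:

```java
// Inside the code that constructs the BackingStoreHashTableFromScan
// (sketch only; getResultSetHoldability() is an assumed accessor):
boolean keepAfterCommit = activation.getResultSetHoldability();

hashTable = new BackingStoreHashTableFromScan(
    tc, /* ...other existing arguments unchanged... */
    keepAfterCommit   // was: hard-coded false
);
```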

2) Need to check whether temporary files are going to work right in the
    holdability case.  It looks like when actual backing to disk was
    added, the holdability case was not considered.

Does anyone know if Derby temporary files will work correctly if held
open past commit?  Offhand I don't remember the process by which they
are cleaned up -- is that currently keyed to commit?

3) Are you interested in a workaround?  If the hash table got created in
    memory rather than on disk, then this would probably work.  I think
    there are some flags to force bigger in-memory hash result sets.
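
One candidate knob, if memory serves (worth verifying against the Derby
tuning guide), is the derby.language.maxMemoryPerTable property, a per-table
memory ceiling in kilobytes that influences whether hash tables are kept in
memory.  A derby.properties fragment might look like:

```
# derby.properties (sketch): raise the per-table memory ceiling so hash
# tables are more likely to stay in memory (value in KB; the default is
# believed to be 1024).  Whether this actually avoids the disk-backed
# path depends on the size of the result set.
derby.language.maxMemoryPerTable=10240
```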

Jeffrey Clary wrote:
> Folks,
> 
>  
> 
> I'm new to Derby and to these lists, so I'm not sure whether what I am
> reporting is a bug or expected behavior.  You can see an earlier question
> I asked on the derby-user list on 3/15/2007, titled "Heap container
> closed exception (2 statements on same connection)."
> 
>  
> 
> I am not seeing the behavior I would expect after calling
> Connection.setHoldability(ResultSet.HOLD_CURSORS_OVER_COMMIT).  I have
> attached a test program that displays the behavior.  Here is an outline
> of what happens (with autocommit on):
> 
>  
> 
> 1. Execute a statement that returns a fairly large result set.
> 
> 2. Execute another statement on the same connection that logically
>    does not affect the first result set, but that does update the
>    database.
> 
> 3. Iterate through the first result set.
> 
> 4. After some number of calls to next(), take an exception
>    indicating "heap container closed."
> 
>  
> 
> I have looked a bit into the Derby source code, and I think that the
> issue is related to the
> org.apache.derby.impl.store.access.BackingStoreHashTableFromScan
> constructor.  It passes a constant false value to its super in the
> keepAfterCommit argument.  In fact, there is a comment there that says
> "Do not keep the hash table after a commit."  It seems to me that this
> value should be based on the holdability attribute of the statement, as
> set on the connection or when the statement is created.  But knowing so
> little about the Derby implementation, I don't have any idea whether
> that would trigger some unintended consequence.
> 
>  
> 
> Any advice would be appreciated.
> 
>  
> 
> Thanks,
> 
> Jeff Clary
> 
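
The four steps in the quoted outline can be sketched in JDBC roughly as
follows (URL and table names are placeholders; the real reproduction is the
test program attached to the original message):

```java
import java.sql.*;

// Sketch of the four-step sequence from the quoted report; the table
// names are placeholders, not taken from the original test program.
public class HoldabilityRepro {
    static void repro(Connection conn) throws SQLException {
        conn.setAutoCommit(true);
        conn.setHoldability(ResultSet.HOLD_CURSORS_OVER_COMMIT);
        try (Statement s1 = conn.createStatement();
             // 1. A statement returning a fairly large result set.
             ResultSet rs = s1.executeQuery("SELECT * FROM big_table");
             Statement s2 = conn.createStatement()) {
            // 2. An update on the SAME connection; autocommit commits it.
            s2.executeUpdate("UPDATE other_table SET c = c + 1");
            // 3. Iterate the held result set...
            int rows = 0;
            while (rs.next()) {
                rows++; // 4. ...which, per the report, eventually fails
                        // with "heap container closed" because the
                        // backing hash table was dropped at commit.
            }
            System.out.println("rows read: " + rows);
        }
    }

    public static void main(String[] args) {
        // JDBC defines holdability as connection/statement state via
        // these two java.sql.ResultSet constants:
        System.out.println("HOLD_CURSORS_OVER_COMMIT="
                + ResultSet.HOLD_CURSORS_OVER_COMMIT);
        System.out.println("CLOSE_CURSORS_AT_COMMIT="
                + ResultSet.CLOSE_CURSORS_AT_COMMIT);
    }
}
```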

