db-derby-dev mailing list archives

From David Van Couvering <David.Vancouver...@Sun.COM>
Subject Re: [jira] Commented: (DERBY-550) BLOB : java.lang.OutOfMemoryError with network JDBC driver (org.apache.derby.jdbc.ClientDriver)
Date Fri, 14 Jul 2006 21:43:42 GMT
This sounds like a reasonable short-term compromise to me.  If we can 
figure out how to make DRDA support the more "flexible" application, 
that would be great too, but I recognize that's a lot of effort, and 
putting it on the back burner while we make some progress would be 
fine.

David

Andreas Korneliussen wrote:
> Bryan Pendleton wrote:
>> David Van Couvering wrote:
>>> I guess what I was assuming was, if the application goes off and does 
>>> something else, we can notice that and either raise an exception 
>>> ("you're not done with that BLOB column yet") or flush the rest of 
>>> the BLOB data, since it's obvious they won't be getting back to it 
>>> (e.g. if they send another query or do ResultSet.next(), it's clear 
>>> they're done with the BLOB column). 
>>
>> Are you sure that's acceptable JDBC behavior? My (very old) copy of the
>> JDBC spec says things like:
>>
>>   The standard behavior for a Blob instance is to remain valid until the
>>   transaction in which it was created is either committed or rolled back.
>>
>> So if I do something like:
>>
>>   ResultSet rs = stmt.executeQuery("SELECT DATA FROM TABLE1");
>>   rs.first();
>>   Blob data = rs.getBlob("DATA");
>>   InputStream blobStream = data.getBinaryStream();
>>
>> I think I am supposed to be allowed to access blobStream quite some 
>> time later,
>> even if I do other things on the connection in the meantime.
>>
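For concreteness, the pattern Bryan describes might look like the sketch
below. The connection URL and schema are hypothetical (TABLE1 with a
BLOB column DATA, as in the snippet above), and whether the stream
really survives the intervening statement is exactly what is in question
in this thread; the quoted spec text says it should remain readable
until the transaction commits or rolls back.

   import java.io.InputStream;
   import java.sql.*;

   public class BlobLifetimeSketch {
       public static void main(String[] args) throws Exception {
           // Hypothetical client URL; any Derby connection would do.
           Connection conn = DriverManager.getConnection(
                   "jdbc:derby://localhost:1527/sample");
           conn.setAutoCommit(false);

           Statement stmt = conn.createStatement();
           ResultSet rs = stmt.executeQuery("SELECT DATA FROM TABLE1");
           rs.next();
           Blob data = rs.getBlob("DATA");
           InputStream blobStream = data.getBinaryStream();

           // Do something else on the same connection in the meantime.
           Statement other = conn.createStatement();
           other.executeQuery("SELECT COUNT(*) FROM TABLE1").close();
           other.close();

           // Come back to the stream "quite some time later". Per the
           // spec language quoted above, this read should still work
           // until the transaction commits or rolls back.
           byte[] buf = new byte[8192];
           while (blobStream.read(buf) != -1) {
               // consume the BLOB
           }
           blobStream.close();
           conn.commit();
           conn.close();
       }
   }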
> 
> There is a similar discussion in
> http://www.nabble.com/-jira--Updated%3A-%28DERBY-721%29-State-of-InputStream-retrieved-from-resultset-is-not-clean-%2C-if-there-exists-previous-InputStream-.-tf664829.html#a1805521
> 
> I understand that preserving a BLOB throughout the lifetime of the 
> transaction may be complicated.
> 
> I would suggest a simplification:
> * We could optimize for the case where the user handles one column of 
> the row at a time. If the user moves away from the row, or to another 
> column, we could flush the Blob into memory. A well-behaved user who 
> handles one row and one column at a time would then avoid 
> out-of-memory issues, while a user who does a table scan and collects 
> all n Blob objects in memory may still risk OutOfMemoryError (a sketch 
> of the well-behaved pattern follows below).
> 
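A sketch of the "nice user" pattern Andreas describes, under the same
hypothetical schema as above (TABLE1 with an ID column and a BLOB column
DATA): each BLOB is fully consumed while the cursor sits on its row, and
no Blob reference outlives the row, so at most one BLOB's worth of data
is in flight at a time.

   import java.io.FileOutputStream;
   import java.io.InputStream;
   import java.io.OutputStream;
   import java.sql.*;

   public class OneBlobAtATime {
       public static void main(String[] args) throws Exception {
           // Hypothetical client URL, as in the sketch above.
           Connection conn = DriverManager.getConnection(
                   "jdbc:derby://localhost:1527/sample");
           Statement stmt = conn.createStatement();
           ResultSet rs = stmt.executeQuery("SELECT ID, DATA FROM TABLE1");
           byte[] buf = new byte[8192];
           while (rs.next()) {
               int id = rs.getInt("ID");
               // Stream the column directly rather than holding a Blob.
               try (InputStream in = rs.getBinaryStream("DATA");
                    OutputStream out =
                            new FileOutputStream("blob-" + id + ".bin")) {
                   int n;
                   while ((n = in.read(buf)) != -1) {
                       out.write(buf, 0, n);
                   }
               }
               // The stream is fully drained before rs.next() moves on,
               // so nothing forces the whole BLOB into memory at once.
           }
           rs.close();
           stmt.close();
           conn.close();
       }
   }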
> Additionally, I think we should be conservative and check available 
> memory before allocating memory for BLOBs, given all the side effects 
> on applications when the VM runs out of memory.
> 
> Andreas
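Andreas's last point, being conservative about free memory before
buffering a BLOB, could be approximated with the Runtime API. A rough
sketch, not actual Derby code; the head-room factor and the null
fall-back contract are illustrative assumptions:

   import java.sql.Blob;
   import java.sql.SQLException;

   public class BlobBufferGuard {
       // Materialize a Blob into a byte[] only if it plausibly fits in
       // the memory still available to the VM; otherwise return null so
       // the caller falls back to blob.getBinaryStream(). The 2x
       // head-room factor is an arbitrary illustrative choice.
       static byte[] bufferIfRoom(Blob blob) throws SQLException {
           long length = blob.length();
           Runtime rt = Runtime.getRuntime();
           // Memory the VM could still obtain: what is free now plus
           // what the heap is still allowed to grow by.
           long available =
                   rt.freeMemory() + (rt.maxMemory() - rt.totalMemory());
           if (length > Integer.MAX_VALUE || length * 2 > available) {
               return null;
           }
           return blob.getBytes(1, (int) length); // positions are 1-based
       }
   }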
