db-derby-dev mailing list archives

From David Van Couvering <David.Vancouver...@Sun.COM>
Subject Re: [jira] Commented: (DERBY-550) BLOB : java.lang.OutOfMemoryError with network JDBC driver (org.apache.derby.jdbc.ClientDriver)
Date Fri, 14 Jul 2006 00:17:44 GMT
Thanks, Bryan, I was not aware you could do that in JDBC -- I've never 
programmed BLOBs in JDBC.  That definitely puts some constraints on the 
network logic.  If you can't cache the BLOB in memory, the network layer 
has to do something with that data, know how to get back to it when you 
ask for it, and have protocol support for "can I have that BLOB again?" 
That does get pretty tricky.
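To make the memory constraint concrete, here is a small self-contained 
sketch (not Derby code -- the class and method names are made up for 
illustration) of the streaming pattern the network layer would need: copy 
the BLOB in fixed-size buffers so peak heap usage is one buffer, not the 
whole value.

```java
import java.io.ByteArrayInputStream;
import java.io.ByteArrayOutputStream;
import java.io.IOException;
import java.io.InputStream;
import java.io.OutputStream;

public class ChunkedCopy {
    // Copy a stream in fixed-size chunks so peak heap use is one buffer,
    // not the whole BLOB.  Returns the number of bytes copied.
    static long copyInChunks(InputStream in, OutputStream out, int bufSize)
            throws IOException {
        byte[] buf = new byte[bufSize];
        long total = 0;
        int n;
        while ((n = in.read(buf)) != -1) {
            out.write(buf, 0, n);
            total += n;
        }
        return total;
    }

    public static void main(String[] args) throws IOException {
        // Stand-in for a large BLOB; a real one could be gigabytes,
        // which is exactly why materializing it causes OutOfMemoryError.
        byte[] blob = new byte[1 << 20];
        ByteArrayOutputStream sink = new ByteArrayOutputStream();
        long copied = copyInChunks(new ByteArrayInputStream(blob), sink, 8192);
        System.out.println(copied); // 1048576
    }
}
```

The catch, as discussed above, is that this one-pass streaming is exactly 
what the JDBC Blob lifetime rules make insufficient: if the client can come 
back to the Blob later in the transaction, the server side needs a way to 
re-fetch or re-position in the data, not just stream it once.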


Bryan Pendleton wrote:
> David Van Couvering wrote:
>> I guess what I was assuming was, if the application goes off and does 
>> something else, we can notice that and either raise an exception 
>> ("you're not done with that BLOB column yet") or flush the rest of the 
>> BLOB data, since it's obvious they won't be getting back to it (e.g. 
>> if they send another query or do ResultSet.next(), it's clear they're 
>> done with the BLOB column). 
> Are you sure that's acceptable JDBC behavior? My (very old) copy of the
> JDBC spec says things like:
>   The standard behavior for a Blob instance is to remain valid until the
>   transaction in which it was created is either committed or rolled back.
> So if I do something like:
>   ResultSet rs = stmt.executeQuery("SELECT DATA FROM TABLE1");
>   rs.first();
>   Blob data = rs.getBlob("DATA");
>   InputStream blobStream = data.getBinaryStream();
> I think I am supposed to be allowed to access blobStream quite some time 
> later,
> even if I do other things on the connection in the meantime.
> But I confess I don't do a lot of BLOB programming in JDBC, so maybe I'm
> manufacturing bogeymen that don't actually exist.
> thanks,
> bryan
