db-derby-user mailing list archives

From Kristian Waagan <kristian.waa...@oracle.com>
Subject Re: BLOB streaming
Date Fri, 18 Feb 2011 15:37:22 GMT
On 18.02.2011 14:59, Brett Wooldridge wrote:
> The question is, is it still fully materialized on the server before
> streaming to the client?

No, it's not.
Ideally, the only time a BLOB would be fully materialized is if 
ResultSet/Blob.getBytes() is called.
There used to be some exceptions (triggers and a few special queries?), 
but I believe at least most of them have been addressed by now.
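For illustration, here is a minimal sketch of reading a BLOB as a stream over the client driver. The table documents(id, content), the row id, and the connection URL/credentials are placeholders, not anything from this thread. The point is that ResultSet.getBinaryStream() lets the driver hand the value over in chunks, whereas getBytes() would pull the whole BLOB into memory:

    import java.io.InputStream;
    import java.io.OutputStream;
    import java.nio.file.Files;
    import java.nio.file.Paths;
    import java.sql.Connection;
    import java.sql.DriverManager;
    import java.sql.PreparedStatement;
    import java.sql.ResultSet;

    public class BlobStreamRead {
        public static void main(String[] args) throws Exception {
            // Placeholder connection details for the Derby client driver.
            try (Connection conn = DriverManager.getConnection(
                     "jdbc:derby://localhost:1527/sampleDB", "user", "password");
                 PreparedStatement ps = conn.prepareStatement(
                     "SELECT content FROM documents WHERE id = ?")) {
                ps.setInt(1, 42);
                try (ResultSet rs = ps.executeQuery()) {
                    if (rs.next()) {
                        // Read the BLOB in 8 KB chunks and copy it to a file;
                        // at no point is the full value held in memory.
                        try (InputStream in = rs.getBinaryStream("content");
                             OutputStream out = Files.newOutputStream(Paths.get("doc.bin"))) {
                            byte[] buf = new byte[8192];
                            int n;
                            while ((n = in.read(buf)) != -1) {
                                out.write(buf, 0, n);
                            }
                        }
                    }
                }
            }
        }
    }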

Other considerations come into play when you are updating BLOBs, but in 
that case large BLOBs are stored temporarily on disk if required.
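And for the update side, a similarly hedged sketch of streaming a large value in through PreparedStatement.setBinaryStream(), again with a hypothetical documents table and placeholder connection details. Handing the driver a stream plus its length means the client never has to build the whole value as a single byte array:

    import java.io.InputStream;
    import java.nio.file.Files;
    import java.nio.file.Path;
    import java.nio.file.Paths;
    import java.sql.Connection;
    import java.sql.DriverManager;
    import java.sql.PreparedStatement;

    public class BlobStreamWrite {
        public static void main(String[] args) throws Exception {
            Path file = Paths.get("large-upload.bin");
            try (Connection conn = DriverManager.getConnection(
                     "jdbc:derby://localhost:1527/sampleDB", "user", "password");
                 InputStream in = Files.newInputStream(file);
                 PreparedStatement ps = conn.prepareStatement(
                     "UPDATE documents SET content = ? WHERE id = ?")) {
                // Pass the stream and its length so the value can be sent in
                // chunks rather than materialized in client memory.
                ps.setBinaryStream(1, in, Files.size(file));
                ps.setInt(2, 42);
                ps.executeUpdate();
            }
        }
    }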


> Brett
> Sent from my iPhone
> On Feb 18, 2011, at 22:11, Knut Anders Hatlen<knut.hatlen@oracle.com>  wrote:
>> Brett Wooldridge<brett.wooldridge@gmail.com>  writes:
>>> Hi all,
>>> I just came across this in the manual, and if accurate raises some
>>> concerns for me:
>>> For applications using the client driver, if the stream is stored in a
>>> column of a type other than LONG VARCHAR or LONG VARCHAR FOR BIT DATA,
>>> the entire stream must be able to fit into memory at one time. Streams
>>> stored in LONG VARCHAR and LONG VARCHAR FOR BIT DATA columns do not
>>> have this limitation.
>>> This seems to imply that if I have a BLOB containing 1GB of data, and
>>> I'm using the client driver, the result cannot be streamed?
>>> Can this possibly be correct? Given the apparent limit of VARCHAR of
>>> ~32K, is there no way to stream large data to a client?
>> A lot of work went into Derby 10.2 and Derby 10.3 to avoid the need to
>> materialize LOBs on the client, so I believe this statement isn't true
>> anymore. I've filed https://issues.apache.org/jira/browse/DERBY-5056 to
>> update the manual.
>> Thanks,
>> --
>> Knut Anders
