db-derby-dev mailing list archives

From: Kathey Marsden <kmarsdende...@sbcglobal.net>
Subject: Re: DERBY-255 Closing a resultset after retrieving a large (> 32K) value with Network Server does not release locks
Date: Thu, 26 May 2005 12:34:52 GMT
Kathey Marsden wrote:

>Currently, even though network server materializes the LOB to the
>client, it uses getBlob or getClob to retrieve the large object. This
>holds locks until the end of the transaction.
>
>I would like to change Network Server to:
>    - Use getCharacterStream and getBinaryStream instead of getClob
>      and getBlob, to avoid holding the locks after the result set is
>      closed.
>    - Always use 8 bytes for the FD:OCA place holder so we don't have
>      to calculate the length.
>
>Does anyone see any issues with this, especially for other clients such
>as ODBC?
>
>
Focusing on Blobs first ....
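
To make the proposal concrete for Blobs, here is a rough sketch
(hypothetical helper and names, not the actual network server code) of
draining the value as a stream, so that no Blob locator exists and no
locks survive closing the result set:

    import java.io.IOException;
    import java.io.InputStream;
    import java.io.OutputStream;
    import java.sql.ResultSet;
    import java.sql.SQLException;

    // Hypothetical helper, not the actual network server code: stream a
    // BLOB column to the client without creating a Blob locator, so no
    // locks outlive rs.close().
    class BlobStreamSketch {
        static void copyBlob(ResultSet rs, int col, OutputStream toClient)
                throws SQLException, IOException {
            InputStream in = rs.getBinaryStream(col); // instead of rs.getBlob(col)
            byte[] buf = new byte[8192];
            for (int n; (n = in.read(buf)) != -1; ) {
                toClient.write(buf, 0, n);
            }
            in.close();
        }
    }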

Well, it looks like the DDMWriter.writeScalarStream() logic is heavily
dependent on the length of the LOB. I changed the extended length
number of bytes to always be 8, but it looks like I still need the
length of the InputStream before I send it. From a specification point
of view I don't think that is required, and the length is not written
out to the stream, but I am having trouble figuring out how to rework
writeScalarStream and company to eliminate the need for it. Of
particular concern is padScalarStreamForError(), which pads out to the
full stream length in the event of an error.
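
For reference, the padding concern amounts to something like this
sketch (hypothetical names, based only on the description above): once
a length has been promised to the client, an error partway through
still forces us to emit the remainder as filler so the stream stays
parseable.

    import java.io.IOException;
    import java.io.OutputStream;

    // Sketch only (hypothetical names): `declared` bytes were promised
    // to the client; after an error at `written` bytes we must still
    // emit declared - written filler bytes.
    class PadSketch {
        static void padForError(OutputStream out, long declared, long written)
                throws IOException {
            for (long i = written; i < declared; i++) {
                out.write(0x0); // filler; the real padScalarStreamForError may differ
            }
        }
    }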

Does anyone
    a) have any ideas on how to rework writeScalarStream and company to
eliminate the need for the length, or ...
    b) have time today to walk through this code with me on IRC, to
better understand what needs to be done, or ...
    c) have a better idea altogether?

In the punt category I have two possible solutions.

    1) I have a fix in a maintenance branch of an old Cloudscape
release which I could port. It does getString() or getBytes() to get
the value, and then has the associated length.
    2) I could call getBinaryStream twice, once to get the length
(with available(), skip()) and again to stream the data to the client;
see the sketch below. Maybe this is not so bad, since Blob.length()
does something similar for large values (it actually reads the data),
so I suppose for large values this might even be faster than what we
do now.
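
The first pass for option 2 might look like this (hypothetical name; it
counts with read() rather than relying on skip(), since skip() may
return 0 before end-of-stream). The value would then be re-fetched with
a second getBinaryStream() call to actually send the data:

    import java.io.IOException;
    import java.io.InputStream;

    // Sketch of the first pass for option 2 (hypothetical name):
    // consume the stream once just to learn its length.
    class LengthSketch {
        static long measure(InputStream in) throws IOException {
            long length = 0;
            byte[] buf = new byte[8192];
            for (int n; (n = in.read(buf)) != -1; ) {
                length += n;
            }
            return length;
        }
    }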

Thanks for any ideas you have.

Kathey



