db-derby-dev mailing list archives

From TomohitoNakayama <tomon...@basil.ocn.ne.jp>
Subject Re: Negative test case (Re: [jira] Commented: (DERBY-326) Improve streaming of large objects for network server and client)
Date Mon, 13 Feb 2006 12:28:06 GMT
Hello.

I enlarged the size of the LOB data and tried the test again.
The result was that the update failed because of a deadlock.

This error is not related to streaming and is not what I expected to provoke.

I have come to think that it is difficult to create a negative test situation for 
streaming only by manipulating the DBMS via the JDBC API.
// If it were possible, it might simply be a bug to be fixed ....

Now I plan to prepare debug code that causes an exception while streaming, 
using SanityManager.
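As a rough illustration of the SanityManager approach, here is a minimal sketch of a
fault-injection point guarded by a debug flag. It is not taken from the Derby code or
from any attached patch: the flag name forceStreamingError, the method copyLobData,
the surrounding class, and the idea that the server writes LOB data in ~32K chunks
are all assumptions made for illustration only.

// Sketch only: a hypothetical server-side copy loop with a
// SanityManager-guarded fault-injection point for negative testing.

import java.io.IOException;
import java.io.InputStream;
import java.io.OutputStream;

import org.apache.derby.iapi.services.sanity.SanityManager;

public class StreamingFaultInjectionSketch {

    // Hypothetical debug flag, enabled e.g. via derby.debug.true=forceStreamingError
    private static final String FORCE_STREAMING_ERROR = "forceStreamingError";

    /**
     * Copies LOB bytes from source to destination in chunks.  When sanity
     * checking is compiled in and the debug flag is set, an IOException is
     * thrown after the first chunk, so a negative test can observe how the
     * client reacts to a failure in the middle of the stream.
     */
    static void copyLobData(InputStream source, OutputStream destination)
            throws IOException {
        byte[] buffer = new byte[32 * 1024];   // roughly the assumed chunk size
        int chunksWritten = 0;
        int read;
        while ((read = source.read(buffer)) != -1) {
            destination.write(buffer, 0, read);
            chunksWritten++;
            if (SanityManager.DEBUG) {
                if (SanityManager.DEBUG_ON(FORCE_STREAMING_ERROR)
                        && chunksWritten == 1) {
                    // Simulate a failure while streaming the LOB.
                    throw new IOException(
                        "debug flag " + FORCE_STREAMING_ERROR
                        + ": simulated failure while streaming LOB data");
                }
            }
        }
    }
}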

Best regards.


Bryan Pendleton wrote:

>>> If the reader is in TRANSACTION_READ_UNCOMMITTED isolation mode and 
>>> another connection then updates the LOB, the reader should get an 
>>> IOException on the next read.
>>>
>> Reading your comment above, I wrote the test code attached to this mail 
>> and tried executing it with my latest patch, which has not been submitted yet.
>> Then I found that no Exception happens at all.
>>
>> Did I misunderstand you somewhere in my test code?
>> Or is this an unlucky success?
>
>
> I think your test program is good, but apparently it does not provoke
> the exception that Kathey has in mind.
>
> It occurs to me that, with your new code, lob data should be fetched
> from the server in chunks of approximately 32K, so I think that you
> may need to incorporate that information into your test program.
>
> That is, when the first connection goes back to read the second set of
> 256 bytes of blob data from the input stream, it might just be returning
> cached data from the first 32K segment that was returned from the server,
> and it might be that it won't encounter the server-side exception until
> it exhausts that first 32K segment and returns to the server for more 
> data.
>
> What if you do something like this:
>
> 1) Initialize the blob column to contain a lot of data: e.g., 128K bytes
> 2) Have the first connection fetch the first 256 bytes, as you do now.
> 3) Have the second transaction update the blob to replace it with a
>    very short value: e.g., 500 bytes of data total.
> 4) Then, have the first transaction attempt to fetch *all* the blob data.
>
> Here are some things that I think *might* happen at step 4:
> 1) The first transaction might get an "end-of-blob" after 500 bytes 
> total.
> 2) The first transaction might get all 128K bytes of the original blob.
> 3) The first transaction might get 32K bytes of the original blob, then
>    get an "end-of-blob"
> 4) The first transaction might get 32K bytes of the original blob, then
>    get an IO Exception (I think this may be the behavior that Kathey was
>    trying to expose).
>
> thanks,
>
> bryan
>
>
>
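For reference, here is a minimal JDBC sketch of the four-step scenario Bryan proposes
above. It is not the test code attached to the earlier mail: the driver class, the
connection URL, the table name lobtest, the column names, the buffer sizes, and the
choice to leave the second connection's update uncommitted (so the READ_UNCOMMITTED
reader can observe it) are all assumptions for illustration.

// Sketch only: exercise the "shrink the blob under a streaming reader" scenario.

import java.io.InputStream;
import java.sql.Connection;
import java.sql.DriverManager;
import java.sql.PreparedStatement;
import java.sql.ResultSet;
import java.sql.Statement;

public class LobStreamingNegativeTestSketch {

    private static final String URL =
        "jdbc:derby://localhost:1527/testdb;create=true";

    public static void main(String[] args) throws Exception {
        Class.forName("org.apache.derby.jdbc.ClientDriver");

        // Setup: create the table and load a large blob (fails on rerun if it exists).
        try (Connection setup = DriverManager.getConnection(URL)) {
            Statement s = setup.createStatement();
            s.executeUpdate(
                "CREATE TABLE lobtest (id INT PRIMARY KEY, data BLOB(1M))");
            // 1) Initialize the blob column with a lot of data: 128K bytes.
            PreparedStatement ins =
                setup.prepareStatement("INSERT INTO lobtest VALUES (1, ?)");
            ins.setBytes(1, new byte[128 * 1024]);
            ins.executeUpdate();
        }

        try (Connection reader = DriverManager.getConnection(URL);
             Connection writer = DriverManager.getConnection(URL)) {

            reader.setAutoCommit(false);
            reader.setTransactionIsolation(Connection.TRANSACTION_READ_UNCOMMITTED);
            writer.setAutoCommit(false);

            // 2) First connection fetches the first 256 bytes of the blob.
            Statement readStmt = reader.createStatement();
            ResultSet rs =
                readStmt.executeQuery("SELECT data FROM lobtest WHERE id = 1");
            rs.next();
            InputStream in = rs.getBinaryStream(1);
            byte[] head = new byte[256];
            int got = in.read(head);
            System.out.println("reader: got first " + got + " bytes");

            // 3) Second connection replaces the blob with a very short value
            //    (500 bytes).  Left uncommitted so the READ_UNCOMMITTED reader
            //    can observe it.
            PreparedStatement upd =
                writer.prepareStatement("UPDATE lobtest SET data = ? WHERE id = 1");
            upd.setBytes(1, new byte[500]);
            upd.executeUpdate();

            // 4) First transaction now tries to read *all* remaining blob data.
            //    If the client cached a ~32K chunk, a server-side error would be
            //    expected only once that chunk is exhausted and more data has to
            //    be fetched from the server.
            long total = got;
            byte[] buf = new byte[8192];
            int n;
            try {
                while ((n = in.read(buf)) != -1) {
                    total += n;
                }
                System.out.println("reader: stream ended after " + total + " bytes");
            } catch (Exception e) {
                System.out.println("reader: exception after " + total + " bytes: " + e);
            }

            reader.rollback();
            writer.rollback();
        }
    }
}

Which of Bryan's four outcomes this sketch actually produces would show whether the
failure surfaces at the 32K chunk boundary, at end-of-stream, or not at all.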

-- 
/*

        Tomohito Nakayama
        tomonaka@basil.ocn.ne.jp
        tomohito@rose.zero.ad.jp
        tmnk@apache.org

        Naka
        http://www5.ocn.ne.jp/~tomohito/TopPage.html

*/ 

