db-derby-dev mailing list archives

From "Dag H. Wanvik (JIRA)" <j...@apache.org>
Subject [jira] Issue Comment Edited: (DERBY-3650) internal multiple references from different rows to a single BLOB/CLOB stream leads to various errors when second reference used.
Date Wed, 09 Dec 2009 20:14:18 GMT

    [ https://issues.apache.org/jira/browse/DERBY-3650?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=12788280#action_12788280
] 

Dag H. Wanvik edited comment on DERBY-3650 at 12/9/09 8:13 PM:
---------------------------------------------------------------

Ok, bear with me if I misunderstand some of the issues here; I'm still
trying to grok this, but I'll weigh in just to get some discussion going.

I looked at the clone methods, and it seems to me that getClone came
first, and that cloneObject was introduced later to avoid always
materializing large objects into many copies. The naming is not good;
the names imply the same behavior. cloneObject is shallow in the sense
that it clones neither the *value* nor the *stream state*, if any.
(Btw, the implementation of SQLChar#cloneObject could be simplified to
look the same as SQLBinary#cloneObject.)
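To make the danger concrete, here is a minimal sketch (invented holder
class, not Derby's actual DataValueDescriptor types) of what sharing
both the value and the stream state means: reading through one alias
silently advances the position seen by every other alias.

```java
import java.io.ByteArrayInputStream;
import java.io.InputStream;

class StreamHolder {
    byte[] value;          // the (possibly large) underlying value
    InputStream stream;    // stream state: the current read position lives here

    StreamHolder(byte[] value) {
        this.value = value;
        this.stream = new ByteArrayInputStream(value);
    }

    private StreamHolder() {}

    // like the old cloneObject: copies only the holder;
    // value AND stream state are shared
    StreamHolder shallowClone() {
        StreamHolder c = new StreamHolder();
        c.value = this.value;
        c.stream = this.stream;   // shared position!
        return c;
    }

    // like the old getClone: copies the value too; nothing is shared
    StreamHolder deepClone() {
        return new StreamHolder(this.value.clone());
    }
}

public class CloneDemo {
    public static void main(String[] args) throws Exception {
        StreamHolder a = new StreamHolder(new byte[] {10, 20, 30});

        StreamHolder shallow = a.shallowClone();
        shallow.stream.read();                  // advances the SHARED stream
        System.out.println(a.stream.read());    // prints 20: a's position moved too

        StreamHolder deep = a.deepClone();
        deep.stream.read();                     // independent stream and value
        System.out.println(a.stream.read());    // prints 30: a unaffected
    }
}
```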

Now, if I understand correctly, the new method, CopyForRead, is slightly
*less shallow*: it also copies the stream state.

For non-stream data types, cloneObject defaults to getClone (deep
copy).

I would suggest we change the names here to clarify the code and APIs,
the better to reflect the shallowness of each clone:

        cloneDeep (old getClone; clones even *value*, share nothing)
        cloneHalfDeep (new CopyForRead, clones even stream state,
                            but not value, which is still shared)
        cloneShallow (old cloneObject, clones just "holder", shares
                      stream/stream state)
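The three proposed levels could be sketched like this (the Holder class
and its fields are made up for illustration; Derby's real types are the
DataValueDescriptor hierarchy):

```java
import java.io.ByteArrayInputStream;
import java.io.InputStream;

class Holder {
    byte[] value;          // the underlying value, possibly large
    InputStream stream;    // stream state (read position)

    Holder(byte[] value, InputStream stream) {
        this.value = value;
        this.stream = stream;
    }

    // old getClone: clones even the *value*; shares nothing
    Holder cloneDeep() {
        byte[] v = value.clone();
        return new Holder(v, new ByteArrayInputStream(v));
    }

    // new CopyForRead: clones the stream state (fresh position),
    // but the value itself is still shared
    Holder cloneHalfDeep() {
        return new Holder(value, new ByteArrayInputStream(value));
    }

    // old cloneObject: clones just the "holder";
    // shares both the stream and its state
    Holder cloneShallow() {
        return new Holder(value, stream);
    }
}
```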

Whether the code still needs cloneShallow once cloneHalfDeep is added,
I don't know; if not, I'd just call cloneHalfDeep cloneShallow
instead ;)

Now, for modification, what to use? I guess that depends on what
semantics are desired and at what level in the code you are. Maybe we
could just do COW (copy-on-write) semantics? I.e. use cloneHalfDeep
until an update is attempted, and only then do a full deep clone (by
overriding the stream class, perhaps). Then the update of the deep copy
could proceed until the column is actually updated, without affecting
the other aliases.
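A rough sketch of that COW idea, with invented names: hand out
half-deep copies for reading, and only materialize a private deep copy
at the first write, so the other aliases never see the modification.

```java
class CowValue {
    private byte[] shared;   // value shared among read-only aliases
    private boolean owned;   // true once we hold a private deep copy

    CowValue(byte[] shared) {
        this.shared = shared;
        this.owned = false;
    }

    // reading never copies
    byte read(int i) {
        return shared[i];
    }

    // the first write triggers the full deep clone (the COW point);
    // later writes go to the already-private copy
    void write(int i, byte b) {
        if (!owned) {
            shared = shared.clone();
            owned = true;
        }
        shared[i] = b;
    }
}
```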


> internal multiple references from different rows to a single BLOB/CLOB stream leads to
various errors when second reference used.
> ---------------------------------------------------------------------------------------------------------------------------------
>
>                 Key: DERBY-3650
>                 URL: https://issues.apache.org/jira/browse/DERBY-3650
>             Project: Derby
>          Issue Type: Bug
>          Components: Network Client, SQL, Store
>    Affects Versions: 10.3.3.0, 10.4.1.3
>         Environment: Mac OSX 10.4
> JDK 1.5.0_13
> Hibernate EntityManager 3.2.1
>            Reporter: Golgoth 14
>         Attachments: cloning-methods.html, derby-3650-preliminary_2_diff.txt, derby-3650-preliminary_diff.txt,
derby-3650_tests_diff.txt, Derby3650EmbeddedRepro.java, Derby3650FullClientRepro.java, Derby3650FullRepro.java,
Derby3650Repro.java, DerbyHibernateTest.zip, testdb.zip, traces_on_FormatIdStream_alloc.txt,
UnionAll.java
>
>
> Derby + Hibernate JPA 3.2.1 problem on entity with Blob/Clob
> Hi,
> I'm using Derby in Client - Server mode with Hibernate JPA EJB 3.0.
> When a query on an entity containing a Clob and some joins on other entities is executed,
an exception with the following message is thrown:
>   XJ073: The data in this BLOB or CLOB is no longer available.  The BLOB/CLOB's transaction
may be committed, or its connection is closed.
> This problem occurs when the property "hibernate.max_fetch_depth" is greater than 0.
> When hibernate.max_fetch_depth=0, the query works.
> If Derby is configured in embedded mode, the query works independently of the value of
hibernate.max_fetch_depth.
> On the Hibernate's documentation, the advised value of hibernate.max_fetch_depth is 3.
> Could you tell me if I did something wrong?
> Thank you.
> Stephane

-- 
This message is automatically generated by JIRA.
-
You can reply to this email to add a comment to the issue online.

