jackrabbit-dev mailing list archives

From "Christoph Kiehl (JIRA)" <j...@apache.org>
Subject [jira] Commented: (JCR-793) WRONG use of BLOB datatype with MySQL in node_data column of the repository database
Date Thu, 15 Mar 2007 13:10:09 GMT

    [ https://issues.apache.org/jira/browse/JCR-793?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel#action_12481144
] 

Christoph Kiehl commented on JCR-793:
-------------------------------------

I would suggest not only changing the DDL but also throwing a suitable exception if the
serialized node state is too big for the BLOB column. This should prevent data corruption
in the first place.
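As a rough illustration of that suggestion, a size check could run before the serialized state is written to the column. This is a hypothetical sketch, not the actual Jackrabbit persistence-manager API; the class and method names are invented, and only the 64K figure (MySQL's BLOB limit of 2^16 - 1 bytes) comes from the issue:

```java
// Hypothetical sketch: fail fast when a serialized node state would not fit
// into a MySQL BLOB column, instead of letting MySQL silently truncate it.
public class BlobSizeCheck {

    // MySQL BLOB columns hold at most 2^16 - 1 = 65,535 bytes.
    static final int MYSQL_BLOB_MAX_BYTES = 65535;

    // Throws instead of allowing silent truncation on write.
    static void checkFits(byte[] serializedNodeState) {
        if (serializedNodeState.length > MYSQL_BLOB_MAX_BYTES) {
            throw new IllegalStateException(
                "Serialized node state is " + serializedNodeState.length
                + " bytes, exceeding the " + MYSQL_BLOB_MAX_BYTES
                + "-byte MySQL BLOB limit; the row would be truncated");
        }
    }

    public static void main(String[] args) {
        checkFits(new byte[1024]);        // small state: passes silently
        boolean thrown = false;
        try {
            checkFits(new byte[70000]);   // oversized state: rejected
        } catch (IllegalStateException e) {
            thrown = true;
        }
        System.out.println(thrown);
    }
}
```

With such a guard in place, an oversized node state surfaces as an exception at save time rather than as a corrupted tree on the next read.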

> WRONG use of BLOB datatype with MySQL in node_data column of the repository database
> ------------------------------------------------------------------------------------
>
>                 Key: JCR-793
>                 URL: https://issues.apache.org/jira/browse/JCR-793
>             Project: Jackrabbit
>          Issue Type: Improvement
>          Components: core
>    Affects Versions: 1.1, 1.1.1, 1.2.1, 1.2.2, 1.2.3
>         Environment: MySql 4.1.16
>            Reporter: mtombesi
>            Priority: Critical
>
> Working with MySQL and Jackrabbit versions 1.1, 1.2.1, and 1.2.2, the datatype used to store
> node hierarchy data is a BLOB, which only has space for 64K. An overflow occurs early (just
> after 1800 child nodes are added) and the data in MySQL becomes corrupted, without the DBMS
> raising any error on commit. It simply truncates the data at the BLOB size limit (64K) during
> serialization. Thus, during the deserialization phase, the node tree is corrupted...
> A simple solution would be to use LONGBLOB for the datatypes in "mysql.ddl".
> I suggest modifying the file in jackrabbit-core.jar: "org/apache/jackrabbit/core/persistence/db/mysql.ddl"
> change:
>         create table ${schemaObjectPrefix}NODE (NODE_ID char(36) not null, NODE_DATA blob not null)
>         create table ${schemaObjectPrefix}PROP (PROP_ID varchar(255) not null, PROP_DATA blob not null)
> with:
>         create table ${schemaObjectPrefix}NODE (NODE_ID char(36) not null, NODE_DATA longblob not null)
>         create table ${schemaObjectPrefix}PROP (PROP_ID varchar(255) not null, PROP_DATA longblob not null)
>         
> This is a very critical problem... I lost data in production.
> Please commit the changes in the next fix release.

-- 
This message is automatically generated by JIRA.
-
You can reply to this email to add a comment to the issue online.

