jackrabbit-users mailing list archives

From "Stefan Guggisberg" <stefan.guggisb...@gmail.com>
Subject Re: DerbyPersistence Error
Date Wed, 14 Mar 2007 14:51:29 GMT
hi sridhar,

On 3/13/07, Sridhar Raman <sridhar.raman@gmail.com> wrote:
> I had posted earlier on my problem when I tried importing a huge file (25000
> records, 16MB XML file).  The log file used to shoot up to 4GB in size,
> while the workspace folder used to become 8GB in size.  I managed to find
> out what were the error messages that were being popped into the status log
> file, right after I do the session.save().  They are these:
>
> 33406033 [main] ERROR
> org.apache.jackrabbit.core.persistence.db.DatabasePersistenceManager -
> failed to write node references: debd7319-5b6a-494c-8686-31cbafdbc497
> ERROR 22001: A truncation error was encountered trying to shrink BLOB
> 'XX-RESOLVE-XX' to length 1048576.

this can happen when you have a node with a lot of references (properties of
type REFERENCE pointing to it). node references are stored per node in a
'blob' column. the default size for a blob type in derby is 1 mb. this allows
for roughly 10k-20k references per node (depending on the length of the
reference property names).
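as a rough sanity check of those numbers (the per-entry size used here is an
assumption for illustration, not jackrabbit's exact serialization format):

```java
// back-of-the-envelope check of the 10k-20k figure: assume each entry in
// REFS_DATA costs roughly a 36-char uuid plus the reference property name
// plus a little per-entry overhead.
public class RefsBlobEstimate {
    public static void main(String[] args) {
        int blobLimit = 1024 * 1024; // derby blob limit from the error: 1048576 bytes
        int uuidChars = 36;          // e.g. debd7319-5b6a-494c-8686-31cbafdbc497
        for (int nameLen : new int[] {16, 64}) {
            int bytesPerRef = uuidChars + nameLen + 2; // + assumed overhead
            System.out.println("name length " + nameLen + " -> ~"
                    + (blobLimit / bytesPerRef) + " references per node");
        }
    }
}
```

with short property names (~16 chars) that gives roughly 19k references per
node, with long ones (~64 chars) roughly 10k, which is where the 10k-20k
range comes from.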

you can increase this limit by making e.g. the following change in the
derby.ddl file:

change

     create table ${schemaObjectPrefix}REFS (NODE_ID char(36) not null, REFS_DATA blob not null)

to

     create table ${schemaObjectPrefix}REFS (NODE_ID char(36) not null, REFS_DATA blob(5m) not null)
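note that derby.ddl is only consulted when the tables are first created, so
for an existing workspace you would have to widen the column in place. a
sketch, assuming the DEMOFULL_ prefix visible in your log and a derby
version that supports widening blob columns via SET DATA TYPE (please
verify against your derby release before running):

```sql
-- assumption: ${schemaObjectPrefix} expands to DEMOFULL_ as in the log
-- output above; adjust the table name to match your actual schema.
-- derby allows increasing (not shrinking) a blob column's declared length:
ALTER TABLE DEMOFULL_REFS ALTER COLUMN REFS_DATA SET DATA TYPE BLOB(5M);
```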

however, be aware that jackrabbit's current db persistence model is not
optimized for nodes with huge lists of referrers (say >10k references per node).

>     at org.apache.derby.iapi.error.StandardException.newException(Unknown
> Source)
>     at org.apache.derby.iapi.types.SQLBinary.checkHostVariable(Unknown
> Source)
>     ...
>     ...
>
> 33424034 [main] ERROR
> org.apache.jackrabbit.core.persistence.db.DatabasePersistenceManager -
> failed to write property state: 733a8ba0-75e6-4390-8613-b10726cd5e75/{
> http://www.jcp.org/jcr/1.0}uuid
> ERROR 23505: The statement was aborted because it would have caused a
> duplicate key value in a unique or primary key constraint or unique index
> identified by 'DEMOFULL_PROP_IDX' defined on 'DEMOFULL_PROP'.
>     at org.apache.derby.iapi.error.StandardException.newException(Unknown
> Source)
>     ...
>     ...
>
> 33424034 [main] ERROR
> org.apache.jackrabbit.core.persistence.db.DatabasePersistenceManager -
> failed to write property state: 733a8ba0-75e6-4390-8613-b10726cd5e75/{
> http://www.jcp.org/jcr/1.0}uuid
> ERROR 23505: The statement was aborted because it would have caused a
> duplicate key value in a unique or primary key constraint or unique index
> identified by 'DEMOFULL_PROP_IDX' defined on 'DEMOFULL_PROP'.

there was a similar issue on jira that was caused by 2 or more threads
trying to concurrently create the same property on the same node.
for more details please see: https://issues.apache.org/jira/browse/JCR-721

the only explanations i can currently come up with for the 'duplicate key'
issue are:

1. it's a concurrency issue (JCR-721)
2. the data somehow got corrupted, e.g. due to abnormal jvm termination
    (crash, power outage, etc.).
3. multiple repository instances accessing the same derby instance.

2. is imo very unlikely since the changes are committed within a jdbc
transaction and i trust derby to handle those correctly.

wrt 1. and 3. we would need to know more about your setup and test code.
could you perhaps provide more details such as your configuration,
code samples and sample data?

cheers
stefan

>
> Any pointers on how to fix this? Any help would be really appreciated!
>
> Thanks in advance,
> Sridhar
>
