jackrabbit-dev mailing list archives

From Cédric Chantepie (JIRA) <j...@apache.org>
Subject [jira] Commented: (JCR-2492) Garbage Collector remove data for active node
Date Tue, 16 Feb 2010 13:27:27 GMT

    [ https://issues.apache.org/jira/browse/JCR-2492?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=12834205#action_12834205 ]

Cédric Chantepie commented on JCR-2492:
---------------------------------------

I'm still able to reproduce this problem with the 42Gb datastore.
I've also been able to reproduce it once with a smaller datastore; I will try to figure out
exactly what causes it.

It seems that the jackrabbit-core used by my RAR is 1.4 (not 1.4.5), even though the other libs are 1.4.5.

Looking at jackrabbit-1.4 from SVN, I have some doubts about a line in
org.apache.jackrabbit.core.persistence.bundle.BundleDbPersistenceManager::getAllNodeIds:

    Statement stmt = connectionManager.executeStmt(sql, keys, false, maxCount + 10);

Because of the "+ 10", an unlimited maxCount (0) is turned into 10. So, as far as I understand,
getAllNodeIds asks its connectionManager for all node ids, but with a query whose result is
limited to 10 rows.
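
To make the effect of that "+ 10" concrete, here is a minimal standalone sketch. It is not the
Jackrabbit source; it only assumes that executeStmt forwards the value to
java.sql.Statement.setMaxRows, where 0 means "no limit":

    public class MaxCountSketch {

        // Stand-in for the maxRows convention of java.sql.Statement.setMaxRows():
        // 0 means "no limit", any positive value caps the result set.
        static String describeLimit(int maxRows) {
            return maxRows == 0 ? "unlimited rows" : "at most " + maxRows + " rows";
        }

        public static void main(String[] args) {
            int maxCount = 0; // caller asks for all node ids

            // What the 1.4 code does: the sentinel 0 silently becomes a limit of 10.
            System.out.println("maxCount + 10 -> " + describeLimit(maxCount + 10));

            // A guarded expression that keeps 0 meaning "unlimited"
            // (a hypothetical fix, not taken from the Jackrabbit code base).
            int guarded = (maxCount == 0) ? 0 : maxCount + 10;
            System.out.println("guarded       -> " + describeLimit(guarded));
        }
    }

With maxCount = 0 the first line prints "at most 10 rows", which is the behaviour described above.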

If I'm right, the GarbageCollector, which calls getAllNodeIds on each given
IterablePersistenceManager (in scanPersistenceManagers), does not really get all nodes because
of that row limit, so only some nodes are marked (their date updated). Nodes that are not
marked (not included in the retrieved rows) are then considered removable by the deleteUnused
method of GarbageCollector.
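
If that reading is correct, the consequence for the mark-and-sweep cycle looks roughly like the
toy model below (assumed names, one data record per node for simplicity; this is not the real
GarbageCollector API, just an illustration of the failure mode):

    import java.util.ArrayList;
    import java.util.Iterator;
    import java.util.LinkedHashMap;
    import java.util.List;
    import java.util.Map;

    public class GcSketch {
        public static void main(String[] args) {
            // 50 data records, one per node, all last modified long before the scan.
            Map<String, Long> dataStore = new LinkedHashMap<String, Long>();
            for (int i = 0; i < 50; i++) {
                dataStore.put("record-" + i, Long.valueOf(0L));
            }

            long scanStart = System.currentTimeMillis();

            // Mark phase: the row limit means only 10 node ids come back, so only
            // their records get a fresh modification date.
            List<String> visibleNodeIds =
                    new ArrayList<String>(dataStore.keySet()).subList(0, 10);
            for (String id : visibleNodeIds) {
                dataStore.put(id, Long.valueOf(System.currentTimeMillis()));
            }

            // Sweep phase (what deleteUnused amounts to here): anything not touched
            // since the scan started is treated as unused and removed.
            for (Iterator<Map.Entry<String, Long>> it = dataStore.entrySet().iterator(); it.hasNext();) {
                if (it.next().getValue().longValue() < scanStart) {
                    it.remove();
                }
            }

            System.out.println(dataStore.size() + " of 50 records survive; "
                    + (50 - dataStore.size()) + " in-use records were deleted");
        }
    }

In this sketch only the ten "visible" records survive; everything else is swept even though it
is still referenced by a node.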

> Garbage Collector remove data for active node
> ---------------------------------------------
>
>                 Key: JCR-2492
>                 URL: https://issues.apache.org/jira/browse/JCR-2492
>             Project: Jackrabbit Content Repository
>          Issue Type: Bug
>    Affects Versions: core 1.4.5
>         Environment: Linux 2.6.x (gentoo or fedora), JDK 1.5 (sun or jrockit), JBoss 4.2.3.GA, Derby (10.4.1.3), PostgreSQL (8.1.11 or 8.0.3)
> * FileSystem = LocalFileSystem
> * custom AccessManager
> * PersistenceManager = PostgreSQLPersistenceManager
> * SearchIndex, textFilterClasses = ""
> * DataStore = FileDataStore (minLogRecord = 100)
>            Reporter: Cédric Chantepie
>            Priority: Critical
>
> When we use the GarbageCollector on a 42Gb datastore, it erases all data.
> Going back to the nodes, none of them has its data any longer: jcr:data was removed because
> the data no longer exists in the datastore.
> On smaller test repositories, this problem does not occur.
> We will try to update our Jackrabbit version, but it would at least be good to know what the
> real problem with the GC in Jackrabbit 1.4.5 is, so that we can be sure that updating will
> really fix it.
> Thanks.

-- 
This message is automatically generated by JIRA.
-
You can reply to this email to add a comment to the issue online.

