jackrabbit-users mailing list archives

From Ross.Dy...@ipaustralia.gov.au
Subject RE: Jackrabbit 2.2.5 - loss of data [SEC=UNCLASSIFIED]
Date Thu, 22 Dec 2011 23:11:01 GMT
Can you put a println in the Jackrabbit code and confirm the expected path 
when the exception is thrown?
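A minimal stand-in for that kind of probe (purely illustrative, not Jackrabbit's API: the map below mimics a path-to-content index, and the real suggestion is to add the same println inside Jackrabbit where the PathNotFoundException is raised):

```java
import java.util.HashMap;
import java.util.Map;
import java.util.NoSuchElementException;

public class PathProbe {
    // Hypothetical lookup helper: print the path being resolved so a
    // "path does not exist" failure can be matched against what was
    // actually requested.
    static String getLogged(Map<String, String> index, String path) {
        System.out.println("Looking up path: " + path);  // confirm what was asked for
        String id = index.get(path);
        if (id == null) {
            System.out.println("No entry for: " + path); // the path the failure refers to
            throw new NoSuchElementException(path);
        }
        return id;
    }

    public static void main(String[] args) {
        Map<String, String> index = new HashMap<>();
        index.put("/docs/report.pdf", "content-id-1"); // hypothetical entry
        System.out.println(getLogged(index, "/docs/report.pdf"));
    }
}
```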



From:   "Shah, Sumit (CGI Federal)" <Sumit.Shah@cgifederal.com>
To:     "users@jackrabbit.apache.org" <users@jackrabbit.apache.org>
Date:   21/12/2011 02:45 AM
Subject:        RE: Jackrabbit 2.2.5 - loss of data [SEC=UNCLASSIFIED]



Thanks Ross. It seems like the content is present on the filesystem: I can 
see the old documents in the repository/datastore folders. But the link 
between the Jackrabbit metadata (e.g., the path) and the content seems to 
be broken. Any idea why this would happen?

Does Jackrabbit use UUIDs internally to store the metadata and the content 
itself?
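For context, Jackrabbit's DataStore addresses binaries by a content hash rather than by path, and the node metadata holds only that identifier, so losing either half breaks the link while the other half survives on disk. A minimal sketch of that naming scheme (the exact directory layout here is an assumption modeled on FileDataStore, not taken from this thread):

```java
import java.nio.charset.StandardCharsets;
import java.security.MessageDigest;

public class DataStorePath {
    // Compute a FileDataStore-style location for a binary: the SHA-1
    // digest of the content names the file, with the first byte pairs
    // used as directory levels to spread records out.
    static String dataStorePath(byte[] content) throws Exception {
        MessageDigest sha1 = MessageDigest.getInstance("SHA-1");
        byte[] digest = sha1.digest(content);
        StringBuilder hex = new StringBuilder();
        for (byte b : digest) {
            hex.append(String.format("%02x", b));
        }
        String h = hex.toString();
        return h.substring(0, 2) + "/" + h.substring(2, 4) + "/"
                + h.substring(4, 6) + "/" + h;
    }

    public static void main(String[] args) throws Exception {
        // prints aa/f4/c6/aaf4c61ddcc5e8a2dabede0f3b482cd9aea9434d
        System.out.println(dataStorePath("hello".getBytes(StandardCharsets.UTF_8)));
    }
}
```

Because the file name is derived from the content, the datastore folder can still hold every old document while the repository metadata no longer knows which digest a given path maps to.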

Thanks
Sumit

From: Ross.Dyson@ipaustralia.gov.au [mailto:Ross.Dyson@ipaustralia.gov.au]
Sent: Monday, December 19, 2011 9:20 PM
To: users@jackrabbit.apache.org
Cc: users@jackrabbit.apache.org
Subject: Re: Jackrabbit 2.2.5 - loss of data [SEC=UNCLASSIFIED]

This looks suspiciously like a problem I have had before, where somebody 
writes a cleanup script that deletes anything that looks like a temp file: 
no file extension, more than a month old.  I had one that was deleting 
classes created at runtime, so each morning there was a good chance of 
getting classloader errors.

Best of luck.



From:        "Shah, Sumit (CGI Federal)" <Sumit.Shah@cgifederal.com>
To:        "users@jackrabbit.apache.org" <users@jackrabbit.apache.org>
Date:        20/12/2011 11:58 AM
Subject:        Jackrabbit 2.2.5 - loss of data
________________________________



Hi All,

I am running into a serious issue. It seems like I am unable to retrieve 
documents from Jackrabbit that are more than a month old. I get the 
following error:

"JCR Action 'Get stream' cannot be performed because the provided path 
does not exist"

I am running Jackrabbit in standalone mode and also in a clustered 
environment, and I am seeing the same issue on both. When does this 
happen? Is there a self-initiated process that cleans up the data within 
Jackrabbit? What are the possible resolutions to this?

I would appreciate any help on this.

Thanks
Sumit

