lucene-dev mailing list archives

From "Nico Krijnen (JIRA)" <j...@apache.org>
Subject [jira] Commented: (LUCENE-2729) Index corruption after 'read past EOF' under heavy update load and snapshot export
Date Tue, 09 Nov 2010 14:40:16 GMT

    [ https://issues.apache.org/jira/browse/LUCENE-2729?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=12930142#action_12930142 ]

Nico Krijnen commented on LUCENE-2729:
--------------------------------------

{quote}
...spooky "other" exceptions...
{quote}

These are all 'WARN' level, and all of them are caused by non-critical timeouts in our code, which in turn are caused by the very heavy load needed to reproduce the bug.


{quote}
Would it be possible to instrument the Zoie code to note as the backup
process is copying each file in the snapshot, and at that point print
a listing of the directory?
{quote}

Will do, that is a good one. Then we know which files are being 'held' by the Zoie deletion
policy for the backup.
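
Something along these lines is what I have in mind (a minimal sketch, not the actual Zoie copy loop; the Directory and file-name parameters are hypothetical placeholders for whatever the snapshot code passes around):

{code}
// Minimal sketch: assumed to be called from the Zoie snapshot copy loop, once per file.
// 'dir' and 'fileName' are hypothetical parameters for illustration.
import java.io.IOException;
import java.util.Arrays;
import org.apache.lucene.store.Directory;

public class SnapshotCopyLogger {
    public static void logBeforeCopy(Directory dir, String fileName) throws IOException {
        // Print the full index directory listing at the moment each snapshot file is
        // copied, so we can see which files the deletion policy is still holding.
        System.out.println("snapshot: copying " + fileName
            + ", directory contains: " + Arrays.toString(dir.listAll()));
    }
}
{code}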


{quote}
Also, can you write to the log when Zoie applies deletes? (Looks like
it happens in proj.zoie.impl.indexing.internal.BaseSearchIndex.deleteDocs).
It's on applying deletes that the corruption is first detected, so, if
we log this event we can better bracket the period of time when the
corruption happened.
{quote}

Will do, but we also got the error while Zoie was opening a new IndexWriter:

{code}
15:25:03,453 [proj.zoie.impl.indexing.internal.RealtimeIndexDataLoader@3d9e7719] 
    ERROR proj.zoie.impl.indexing.internal.LuceneIndexDataLoader - 
    Problem copying segments: read past EOF
java.io.IOException: read past EOF
    at org.apache.lucene.store.BufferedIndexInput.refill(BufferedIndexInput.java:154)
    at org.apache.lucene.store.BufferedIndexInput.readByte(BufferedIndexInput.java:39)
    at org.apache.lucene.store.ChecksumIndexInput.readByte(ChecksumIndexInput.java:37)
    at org.apache.lucene.store.IndexInput.readInt(IndexInput.java:69)
    at org.apache.lucene.index.SegmentInfos.read(SegmentInfos.java:245)
    at org.apache.lucene.index.IndexFileDeleter.<init>(IndexFileDeleter.java:170)
    at org.apache.lucene.index.IndexWriter.init(IndexWriter.java:1127)
    at org.apache.lucene.index.IndexWriter.<init>(IndexWriter.java:960)
    at proj.zoie.impl.indexing.internal.DiskSearchIndex.openIndexWriter(DiskSearchIndex.java:176)
    at proj.zoie.impl.indexing.internal.BaseSearchIndex.loadFromIndex(BaseSearchIndex.java:228)
    at proj.zoie.impl.indexing.internal.LuceneIndexDataLoader.loadFromIndex(LuceneIndexDataLoader.java:153)
    at proj.zoie.impl.indexing.internal.DiskLuceneIndexDataLoader.loadFromIndex(DiskLuceneIndexDataLoader.java:134)
    at proj.zoie.impl.indexing.internal.RealtimeIndexDataLoader.processBatch(RealtimeIndexDataLoader.java:172)
    at proj.zoie.impl.indexing.internal.BatchedIndexDataLoader$LoaderThread.run(BatchedIndexDataLoader.java:377)
{code}

I'll add a log there too.
My guess is that the 'read past EOF' is not really specific to applying deletes, but just
happens when the SegmentInfos is loaded from a 0 kb file.
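
To check that guess I'll try something like the following against one of the 0 kb segments_N files (standalone sketch; the index path and segments file name are made up):

{code}
// Standalone sketch: read SegmentInfos directly from a truncated (0 kb) segments_N file.
// Path and file name are hypothetical; point them at one of the 0 kb files.
import java.io.File;
import org.apache.lucene.index.SegmentInfos;
import org.apache.lucene.store.SimpleFSDirectory;

public class ZeroLengthSegmentsCheck {
    public static void main(String[] args) throws Exception {
        SimpleFSDirectory dir = new SimpleFSDirectory(new File("/path/to/index"));
        SegmentInfos infos = new SegmentInfos();
        // On a 0 kb file the first readInt() forces BufferedIndexInput.refill(),
        // which throws java.io.IOException: read past EOF.
        infos.read(dir, "segments_42");
    }
}
{code}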


{quote}
Does Zoie ever open an IndexReader or IndexWriter passing in an
existing commit point? Or does it always open the latest commit?
{quote}

I'll try to find out.
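
In case it helps while I check: as far as I know these are the Lucene 3.0 entry points that take an explicit commit point (a sketch under that assumption, not Zoie code; the index path is made up):

{code}
// Sketch of the Lucene 3.0 "open at a specific commit" entry points. Not Zoie code.
import java.io.File;
import org.apache.lucene.analysis.standard.StandardAnalyzer;
import org.apache.lucene.index.IndexCommit;
import org.apache.lucene.index.IndexReader;
import org.apache.lucene.index.IndexWriter;
import org.apache.lucene.index.KeepOnlyLastCommitDeletionPolicy;
import org.apache.lucene.store.SimpleFSDirectory;
import org.apache.lucene.util.Version;

public class OpenAtCommit {
    public static void main(String[] args) throws Exception {
        SimpleFSDirectory dir = new SimpleFSDirectory(new File("/path/to/index"));

        // List the commit points still present in the directory.
        IndexCommit first = null;
        for (IndexCommit commit : IndexReader.listCommits(dir)) {
            System.out.println(commit.getSegmentsFileName() + " " + commit.getTimestamp());
            if (first == null) {
                first = commit;
            }
        }
        if (first == null) {
            return;
        }

        // Open a reader on that specific commit instead of the latest one.
        IndexReader reader = IndexReader.open(first, true);
        reader.close();

        // Expert API: open a writer rolled back to that commit.
        IndexWriter writer = new IndexWriter(dir, new StandardAnalyzer(Version.LUCENE_30),
            new KeepOnlyLastCommitDeletionPolicy(), IndexWriter.MaxFieldLength.UNLIMITED, first);
        writer.close();
    }
}
{code}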


{quote}
The timestamps on the zero length files are particularly spooky - the
earliest ones are 15:21 (when first EOF is hit), but then also 15:47
and 15:49 on the others. It seems like on 3 separate occasions
something truncated the files.
{quote}

Indeed, I thought this was weird too.

> Index corruption after 'read past EOF' under heavy update load and snapshot export
> ----------------------------------------------------------------------------------
>
>                 Key: LUCENE-2729
>                 URL: https://issues.apache.org/jira/browse/LUCENE-2729
>             Project: Lucene - Java
>          Issue Type: Bug
>          Components: Index
>    Affects Versions: 3.0.1, 3.0.2
>            Environment: Happens on both OS X 10.6 and Windows 2008 Server. Integrated with zoie (using a zoie snapshot from 2010-08-06: zoie-2.0.0-snapshot-20100806.jar).
>            Reporter: Nico Krijnen
>         Attachments: 2010-11-02 IndexWriter infoStream log.zip, LUCENE-2729-test1.patch
>
>
> We have a system running lucene and zoie. We use lucene as a content store for a CMS/DAM system. We use the hot-backup feature of zoie to make scheduled backups of the index. This works fine for small indexes and when there are not a lot of changes to the index while the backup is made.
> On large indexes (about 5 GB to 19 GB), when a backup is made while the index is being changed a lot (lots of document additions and/or deletions), we almost always get a 'read past EOF' at some point, followed by lots of 'Lock obtain timed out'.
> At that point we get lots of 0 kb files in the index, data gets lost, and the index is unusable.
> When we stop our server, remove the 0 kb files, and restart our server, the index is operational again, but data has been lost.
> I'm not sure if this is a zoie or a lucene issue, so I'm posting it to both. Hopefully someone has some ideas where to look to fix this.
> Some more details...
> Stack trace of the read past EOF and following Lock obtain timed out:
> {code}
> 78307 [proj.zoie.impl.indexing.internal.RealtimeIndexDataLoader@31ca5085] ERROR proj.zoie.impl.indexing.internal.BaseSearchIndex - read past EOF
> java.io.IOException: read past EOF
>     at org.apache.lucene.store.BufferedIndexInput.refill(BufferedIndexInput.java:154)
>     at org.apache.lucene.store.BufferedIndexInput.readByte(BufferedIndexInput.java:39)
>     at org.apache.lucene.store.ChecksumIndexInput.readByte(ChecksumIndexInput.java:37)
>     at org.apache.lucene.store.IndexInput.readInt(IndexInput.java:69)
>     at org.apache.lucene.index.SegmentInfos.read(SegmentInfos.java:245)
>     at org.apache.lucene.index.IndexFileDeleter.<init>(IndexFileDeleter.java:166)
>     at org.apache.lucene.index.DirectoryReader.doCommit(DirectoryReader.java:725)
>     at org.apache.lucene.index.IndexReader.commit(IndexReader.java:987)
>     at org.apache.lucene.index.IndexReader.commit(IndexReader.java:973)
>     at org.apache.lucene.index.IndexReader.decRef(IndexReader.java:162)
>     at org.apache.lucene.index.IndexReader.close(IndexReader.java:1003)
>     at proj.zoie.impl.indexing.internal.BaseSearchIndex.deleteDocs(BaseSearchIndex.java:203)
>     at proj.zoie.impl.indexing.internal.BaseSearchIndex.loadFromIndex(BaseSearchIndex.java:223)
>     at proj.zoie.impl.indexing.internal.LuceneIndexDataLoader.loadFromIndex(LuceneIndexDataLoader.java:153)
>     at proj.zoie.impl.indexing.internal.DiskLuceneIndexDataLoader.loadFromIndex(DiskLuceneIndexDataLoader.java:134)
>     at proj.zoie.impl.indexing.internal.RealtimeIndexDataLoader.processBatch(RealtimeIndexDataLoader.java:171)
>     at proj.zoie.impl.indexing.internal.BatchedIndexDataLoader$LoaderThread.run(BatchedIndexDataLoader.java:373)
> 579336 [proj.zoie.impl.indexing.internal.RealtimeIndexDataLoader@31ca5085] ERROR proj.zoie.impl.indexing.internal.LuceneIndexDataLoader -
>     Problem copying segments: Lock obtain timed out: org.apache.lucene.store.SingleInstanceLock@5ad0b895: write.lock
> org.apache.lucene.store.LockObtainFailedException: Lock obtain timed out: org.apache.lucene.store.SingleInstanceLock@5ad0b895: write.lock
>     at org.apache.lucene.store.Lock.obtain(Lock.java:84)
>     at org.apache.lucene.index.IndexWriter.init(IndexWriter.java:1060)
>     at org.apache.lucene.index.IndexWriter.<init>(IndexWriter.java:957)
>     at proj.zoie.impl.indexing.internal.DiskSearchIndex.openIndexWriter(DiskSearchIndex.java:176)
>     at proj.zoie.impl.indexing.internal.BaseSearchIndex.loadFromIndex(BaseSearchIndex.java:228)
>     at proj.zoie.impl.indexing.internal.LuceneIndexDataLoader.loadFromIndex(LuceneIndexDataLoader.java:153)
>     at proj.zoie.impl.indexing.internal.DiskLuceneIndexDataLoader.loadFromIndex(DiskLuceneIndexDataLoader.java:134)
>     at proj.zoie.impl.indexing.internal.RealtimeIndexDataLoader.processBatch(RealtimeIndexDataLoader.java:171)
>     at proj.zoie.impl.indexing.internal.BatchedIndexDataLoader$LoaderThread.run(BatchedIndexDataLoader.java:373)
> {code}
> We get exactly the same behaviour on both OS X and on Windows. On both, zoie is using a SimpleFSDirectory.
> We also use a SingleInstanceLockFactory (since our process is the only one working with the index), but we get the same behaviour with a NativeFSLock.
> The snapshot backup is being made by calling:
> *proj.zoie.impl.indexing.ZoieSystem.exportSnapshot(WritableByteChannel)*
> Same issue in zoie JIRA:
> http://snaprojects.jira.com/browse/ZOIE-51
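> For completeness, the export call looks roughly like this on our side (a sketch; the output path, channel handling, and helper class are illustrative, only exportSnapshot(WritableByteChannel) is the Zoie API):
> {code}
> // Sketch of how we invoke the snapshot export (target path is illustrative).
> import java.io.FileOutputStream;
> import java.nio.channels.WritableByteChannel;
> import proj.zoie.impl.indexing.ZoieSystem;
> 
> public class SnapshotBackup {
>     public static void backup(ZoieSystem zoieSystem, String targetPath) throws Exception {
>         FileOutputStream out = new FileOutputStream(targetPath);
>         WritableByteChannel channel = out.getChannel();
>         try {
>             // Stream the current index snapshot to the backup file.
>             zoieSystem.exportSnapshot(channel);
>         } finally {
>             channel.close();
>             out.close();
>         }
>     }
> }
> {code}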

