lucene-java-user mailing list archives

From "Michael McCandless" <>
Subject Re: Lucene 2.2, NFS, Lock obtain timed out
Date Tue, 03 Jul 2007 11:21:14 GMT

"Patrick Kimber" <> wrote:

> I am using the NativeFSLockFactory.  I was hoping this would have
> stopped these errors.

I believe this is not a locking issue and NativeFSLockFactory should
be working correctly over NFS.

> Here is the whole of the stack trace:
> Caused by:
> /mnt/nfstest/repository/lucene/lucene-icm-test-1-0/segments_n (No such
> file or directory)
> 	at Method)
> 	at<init>(
> 	at$FSIndexInput$Descriptor.<init>(
> 	at$FSIndexInput.<init>(
> 	at$FSIndexInput.<init>(
> 	at
> 	at
> 	at org.apache.lucene.index.IndexFileDeleter.<init>(
> 	at org.apache.lucene.index.IndexWriter.init(
> 	at org.apache.lucene.index.IndexWriter.<init>(
> 	at com.subshell.lucene.indexaccess.impl.IndexAccessProvider.getWriter(
> 	at com.subshell.lucene.indexaccess.impl.LuceneIndexAccessor.getWriter(
> 	at
> 	... 13 more

OK, indeed the exception is inside IndexFileDeleter's initialization
(this is what I had guessed might be happening).

> I have added more logging to my test application.  I have two servers
> writing to a shared Lucene index on an NFS partition...
> Here is the logging from one server...
> [10:49:18] [DEBUG] LuceneIndexAccessor closing cached writer
> [10:49:18] [DEBUG] ExpirationTimeDeletionPolicy onCommit() delete
> [segments_n]
> and the other server (at the same time):
> [10:49:18] [DEBUG] LuceneIndexAccessor opening new writer and caching it
> [10:49:18] [DEBUG] IndexAccessProvider getWriter()
> [10:49:18] [ERROR] DocumentCollection update(DocumentData)
> I/O Error: Cannot add the document to the index.
> [/mnt/nfstest/repository/lucene/lucene-icm-test-1-0/segments_n (No
> such file or directory)]
>     at
> I think the exception is being thrown when the IndexWriter is created:
> new IndexWriter(directory, false, analyzer, false, deletionPolicy);
> I am confused... segments_n should not have been touched for 3 minutes
> so why would a new IndexWriter want to read it?

Whenever a writer is opened, it initializes the deleter
(IndexFileDeleter).  During that initialization, we list all files in
the index directory, and for every segments_N file we find, we open it
and "incRef" all the index files it references.  We then call the
deletion policy's "onInit" to give it a chance to remove any of these
commit points.
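In rough pseudocode terms, the init sequence looks like this (a
simplified sketch, not the actual IndexFileDeleter source; the names
incRef, readCommit and the file names are illustrative stand-ins):

```java
import java.util.ArrayList;
import java.util.Arrays;
import java.util.HashMap;
import java.util.List;
import java.util.Map;

// Simplified sketch of the deleter-init sequence described above --
// NOT the real Lucene 2.2 source.
public class DeleterInitSketch {

    // Reference count per index file name.
    static final Map<String, Integer> refCounts = new HashMap<>();

    static void incRef(String file) {
        refCounts.merge(file, 1, Integer::sum);
    }

    // Stand-in for opening and parsing a segments_N file to learn which
    // index files that commit point uses.  In real Lucene this open is
    // where the FileNotFoundException in the stack trace was thrown.
    static List<String> readCommit(String segmentsFile) {
        return Arrays.asList("_a.fnm", "_a.fdt");
    }

    // List the directory, load every segments_N commit point found, and
    // incRef the files each one uses; returns the commit points.
    static List<String> init(String[] dirListing) {
        List<String> commits = new ArrayList<>();
        for (String name : dirListing) {
            if (name.startsWith("segments_")) {
                commits.add(name);
                for (String used : readCommit(name)) {
                    incRef(used);
                }
            }
        }
        // At this point the deletion policy's onInit(commits) would be
        // called, giving it a chance to delete some commit points.
        return commits;
    }

    public static void main(String[] args) {
        List<String> commits = init(new String[] {
            "_a.fnm", "_a.fdt", "segments_m", "segments_n"});
        System.out.println(commits.size() + " commit points found");
    }
}
```

The key point is that every segments_N name returned by the directory
listing gets opened during this loop, which is why a stale listing can
break the writer open.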

What's happening here is that the NFS directory listing is "stale": it
reports that segments_n exists when in fact it doesn't.  This is
almost certainly due to the NFS client's caching (directory listing
caches are in general not coherent for NFS clients, i.e., they can "lie"
for a short period of time, especially in cases like this).

I think the fix is fairly simple: we should catch the
FileNotFoundException and handle it as if the file did not exist.  I
will open a Jira issue and work up a patch.
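Concretely, the change could look something like this (a sketch of the
idea only, not the actual patch; readCommit and the class name are
hypothetical stand-ins for the deleter's internals):

```java
import java.util.Arrays;
import java.util.List;

// Sketch of the proposed fix (hypothetical names): when the NFS
// client's cached listing is stale, a listed segments_N file may be
// gone by the time we open it, so treat FileNotFoundException as
// "this commit point does not exist" instead of failing the open.
public class StaleListingSketch {

    // Stand-in for opening a commit point; throws the way a real file
    // open does when the file has already been deleted.
    static List<String> readCommit(String name) throws {
        if (name.equals("segments_n")) {  // pretend this entry is stale
            throw new
                name + " (No such file or directory)");
        }
        return Arrays.asList("_a.fnm");
    }

    // Load all commits named by a (possibly stale) directory listing,
    // skipping any that vanished between the listing and the open.
    static int loadCommits(String[] listing) {
        int loaded = 0;
        for (String name : listing) {
            try {
                readCommit(name);
                loaded++;
            } catch ( e) {
                // File disappeared: act as if it was never listed.
            }
        }
        return loaded;
    }

    public static void main(String[] args) {
        int n = loadCommits(new String[] {"segments_m", "segments_n"});
        System.out.println(n + " commit point(s) loaded");
    }
}
```

With that change, a stale segments_n entry is simply skipped and the
writer open proceeds with the commits that really exist.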


