lucene-java-user mailing list archives

From "Yonik Seeley" <>
Subject Re: restoring a corrupt index?
Date Sat, 10 Nov 2007 21:20:03 GMT
On Nov 10, 2007 4:01 PM, Ryan McKinley <> wrote:
> Using Solr, we have been running an indexing process for a while, and
> when I checked on it today it spit out an error:
> java.lang.RuntimeException:
> /path/to/index/_cf9.fnm (No such file or directory)
>         at org.apache.solr.core.SolrCore.getSearcher(
>         at org.apache.solr.core.SolrCore.getSearcher(
> Looking through the archives, it looks like we are up a creek.
> Any thoughts on what could have caused this?  The log files contain
> some 'too many open files' errors, though I can't tell if they correspond
> with when the index went bad.

Yup... that would most likely be it.

> the startup script includes:
>   ulimit -n 100000
> which seems generous, no?

The kernel may have a lower limit.
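If you want to sanity-check the kernel-wide cap (which is separate from the
per-process ulimit), something like this rough sketch works on Linux, assuming
/proc is mounted:

  import java.io.BufferedReader;
  import java.io.FileReader;

  public class FileMaxCheck {
      public static void main(String[] args) throws Exception {
          // Linux exposes the system-wide limit on open file handles here;
          // it can be lower than a generous per-process `ulimit -n`.
          BufferedReader r = new BufferedReader(new FileReader("/proc/sys/fs/file-max"));
          try {
              System.out.println("fs.file-max = " + r.readLine());
          } finally {
              r.close();
          }
      }
  }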

> it is a 22GB index, ls -l | wc shows 180K files (oh my)

I don't think any index with a normal mergeFactor should have that many files.
Most of these files are probably unreferenced by the current index but
haven't been cleaned up due to the errors with file descriptors.
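For comparison, a writer with the normal settings keeps the live file count
small. A rough sketch with the Lucene 2.x API (the defaults shown explicitly;
the path is just a placeholder):

  import org.apache.lucene.analysis.standard.StandardAnalyzer;
  import org.apache.lucene.index.IndexWriter;
  import org.apache.lucene.store.Directory;
  import org.apache.lucene.store.FSDirectory;

  public class WriterDefaults {
      public static void main(String[] args) throws Exception {
          Directory dir = FSDirectory.getDirectory("/path/to/index"); // placeholder path
          IndexWriter writer = new IndexWriter(dir, new StandardAnalyzer(), true);
          // mergeFactor 10 (the default) bounds how many segments pile up per level,
          // and the compound file format (also the default) packs each segment into a
          // single .cfs file, so a healthy index is typically dozens to a few hundred
          // files, nowhere near 180K.
          writer.setMergeFactor(10);
          writer.setUseCompoundFile(true);
          writer.close();
      }
  }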

> So my questions:
> 1. Anything I can do to use this index while I rebuild another? (takes a
> long time!)

Doubt it... you would never be sure if the index was correct.

> 2. Does the ulimit number explain how the index got corrupted?  If so,
> it seems like a problem.

I think the newest Lucene versions would prevent this with
lucene_autocommit=false.  A new segments file (the file that
references all other files in the current index) is not written until
the writer is closed.
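Something like this, if I remember the 2.2/2.3-era constructor correctly (the
path and field are placeholders):

  import org.apache.lucene.analysis.standard.StandardAnalyzer;
  import org.apache.lucene.document.Document;
  import org.apache.lucene.document.Field;
  import org.apache.lucene.index.IndexWriter;
  import org.apache.lucene.store.Directory;
  import org.apache.lucene.store.FSDirectory;

  public class AutoCommitOff {
      public static void main(String[] args) throws Exception {
          Directory dir = FSDirectory.getDirectory("/path/to/index"); // placeholder path
          // autoCommit=false: no new segments_N file is written while documents are
          // being added, so a crash or "too many open files" error mid-run leaves the
          // previous commit point intact instead of a segments file that points at
          // missing files.
          IndexWriter writer = new IndexWriter(dir, false, new StandardAnalyzer(), true);
          try {
              Document doc = new Document();
              doc.add(new Field("id", "1", Field.Store.YES, Field.Index.UN_TOKENIZED));
              writer.addDocument(doc);
          } finally {
              writer.close(); // the new segments file is only committed here, on close
          }
      }
  }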

