lucene-java-user mailing list archives

From "vivek sar" <vivex...@gmail.com>
Subject Re: restoring a corrupt index?
Date Tue, 13 Nov 2007 18:53:19 GMT
We have seen similar exceptions (with Lucene 2.2) when we were making the
following mistakes:

1) Not closing the old searchers while re-creating a new one for every
new search (fixed by closing the old searcher every time; if you want,
you could also just reuse a single searcher instance)
2) Not having a JVM shutdown hook (in case the JVM gets killed, we
need to make sure the indexer is closed if it's in the middle of
indexing) - of course, you would still need to avoid forceful kills
(kill -9).
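The shutdown-hook fix in (2) can be sketched like this. This is a minimal
sketch, not code from the post: closeIndexer() is a hypothetical stand-in
for closing the actual Lucene IndexWriter.

```java
public class IndexerShutdownHook {

    // Hypothetical stand-in for the Lucene indexer; in the real setup this
    // would call writer.close() on the IndexWriter, flushing buffered docs
    // and releasing the write lock so the index is left consistent.
    static void closeIndexer() {
        System.out.println("indexer closed");
    }

    public static void main(String[] args) {
        // The hook runs on normal JVM exit and on a plain `kill` (SIGTERM),
        // but NOT on `kill -9`, which cannot be intercepted.
        Runtime.getRuntime().addShutdownHook(new Thread(IndexerShutdownHook::closeIndexer));
        System.out.println("indexing...");
    }
}
```

When the JVM exits normally or receives SIGTERM, the hook thread runs and
the indexer gets closed; kill -9 bypasses it, which is why it still has to
be avoided.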


Since these two fixes we haven't seen this problem happen. I think
index corruption can occur if the indexer is abruptly stopped while
indexing is in progress.

-vivek

On Nov 11, 2007 2:10 PM, Ryan McKinley <ryantxu@gmail.com> wrote:
>
> >
> > Ryan can you post the output of CheckIndex on your now-working index?
> > (1800 is still too many files I think, certainly after having
> > optimized).
> >
>
> ok, 1800 was wrong - that was from a botched attempt where I:
> 1. ran optimize on the broken 18K file index.  It crashed midway through.
> 2. ran CheckIndex -fix on that
>
> When I:
> 1. run CheckIndex -fix
> 2. run Optimize
> it results in an index with 65 files (that seems normal)
>
>
> > Also, what steps finally allowed you to recover?  CheckIndex
> > (back-ported to 2.2) followed by optimize?
> >
>
> I did not back-port to 2.2; this converted the 2.2 index to 2.3, and I
> shoved the 2.3 libs into an exploded solr.war (just to see if it works).
> It works, but I'm better off going back to an older working version.
>
>
> >
> > I'm still baffled how Lucene 2.2 could ever produce a corrupt index
> > even on hitting descriptor limits or other exceptions.  I can see that
> > this could cause files to not be deleted properly, but, I can't see
> > how it can corrupt the index.
> >
>
> I'm not confident it is Lucene's fault - the hardware has been flaky
> too.  But the 18K files and the 'too many open files' error make me
> suspicious.  Unfortunately, I can't quite grok from my log files when
> stuff started going wrong or how long it has been going on.
>
>
> > Ryan, can you share any details of how you (Solr) are using Lucene?  Are
> > you using autoCommit=false?  I'd really love to get to the root cause
> > here.
> >
>
> I am using all standard solr config (copied from the example).  I am
> using solr's <autoCommit> to "commit" added documents every 30 secs.
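> For reference, this lives in solrconfig.xml under <updateHandler>; a
> minimal sketch (30000 ms matches the 30-second interval above, the
> maxDocs value here is just an illustration):
>
> ```xml
> <autoCommit>
>   <maxDocs>10000</maxDocs>  <!-- illustrative: commit after this many docs -->
>   <maxTime>30000</maxTime>  <!-- commit every 30 seconds (milliseconds) -->
> </autoCommit>
> ```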
>
>
> ryan
>
>
> ---------------------------------------------------------------------
> To unsubscribe, e-mail: java-user-unsubscribe@lucene.apache.org
> For additional commands, e-mail: java-user-help@lucene.apache.org
>
>
