lucene-java-user mailing list archives

From Michael McCandless <>
Subject Re: OutofMemory in large index
Date Fri, 13 Nov 2009 10:35:57 GMT
Interrupting optimize shouldn't cause any problems.  It should have no
effect on the index, except possibly the partially created files might
be orphan'd (left on disk but not referenced by the index), in which
case they'll be cleaned up the next time you open a writer on the
index.

Still, running CheckIndex is always a good idea ;)
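A minimal sketch of that CheckIndex pass (the class name and index path are placeholders; the API shown is roughly the Lucene 2.9-era one, so check the javadocs for your version):

```java
import java.io.File;

import org.apache.lucene.index.CheckIndex;
import org.apache.lucene.store.FSDirectory;

public class VerifyIndex {
    public static void main(String[] args) throws Exception {
        // Placeholder path: point this at the interrupted index.
        FSDirectory dir = FSDirectory.open(new File("/path/to/index"));
        try {
            CheckIndex checker = new CheckIndex(dir);
            // Walks every segment and reports corruption; read-only by default.
            CheckIndex.Status status = checker.checkIndex();
            if (status.clean) {
                System.out.println("Index is clean");
            } else {
                System.out.println("Index has problems; see CheckIndex output");
                // checker.fixIndex(status);  // WARNING: drops corrupt
                // segments AND all documents they contain -- see the javadocs.
            }
        } finally {
            dir.close();
        }
    }
}
```

The same check can be run from the command line with
`java -ea:org.apache.lucene... -cp lucene-core.jar org.apache.lucene.index.CheckIndex /path/to/index`
(run with assertions enabled as shown; only add `-fix` after reading the javadoc warning).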


On Fri, Nov 13, 2009 at 5:17 AM, Ian Lea <> wrote:
>> I got OutOfMemoryError at
>> My index is 43 GB.  Is that too big for Lucene?
>> Luke can see the index has over 1800M docs, but searching also runs
>> out of memory.
>> I use -Xmx1024M to specify 1 GB of Java heap space.
> 43 GB is not too big for Lucene, but it certainly isn't small, and that
> is a lot of docs.  Just give it more memory.
>> One abnormal thing is that I broke a running optimize of this index.
>> Can that be a problem?
> Possibly ...
>> If so, how can I fix an index after the optimize process was interrupted?
> Probably depends on what you mean by broken.  Start with running
> org.apache.lucene.index.CheckIndex.  That can also fix some things -
> but see the warning in the javadocs.
> --
> Ian.
> ---------------------------------------------------------------------
> To unsubscribe, e-mail:
> For additional commands, e-mail:
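Ian's "give it more memory" step is just a JVM flag; a sketch of the invocation (the 4g figure, jar name, and `MySearchApp` class are placeholders for your own setup):

```shell
# Raise the JVM heap ceiling; -Xmx1024M is small for searching ~1.8B docs.
java -Xmx4g -cp lucene-core-2.9.1.jar:. MySearchApp /path/to/index
```

How much heap you actually need depends on sort fields, caches, and the FieldCache footprint of 1.8 billion docs, so treat 4g as a starting point, not a recommendation.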

