lucene-java-user mailing list archives

From Michael McCandless <>
Subject Re: OutOfMemoryError using IndexWriter
Date Wed, 24 Jun 2009 09:52:40 GMT
Hmm -- I think your test env (80 MB heap, 50 MB used by the app + 16 MB
IndexWriter RAM buffer) is a bit too tight.  The 16 MB buffer is not a
hard upper bound on how much RAM IndexWriter may use.  E.g., when merges
are running, more RAM is required; if a single large doc pushes it over
the 16 MB limit, it will consume more; etc.
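On a small heap it can help to set the buffer explicitly below the default.  A minimal sketch against the Lucene 2.4.x API (requires lucene-core on the classpath; the directory path and the 8 MB figure are arbitrary examples, not recommendations):

```java
import org.apache.lucene.analysis.standard.StandardAnalyzer;
import org.apache.lucene.index.IndexWriter;
import org.apache.lucene.store.FSDirectory;

public class SmallHeapWriter {
    public static void main(String[] args) throws Exception {
        // Open a writer on a local directory (path is just an example).
        IndexWriter writer = new IndexWriter(
                FSDirectory.getDirectory("/tmp/idx"),
                new StandardAnalyzer(),
                true,  // create a new index
                IndexWriter.MaxFieldLength.UNLIMITED);

        // Flush once ~8 MB of RAM is buffered, instead of the 16 MB
        // default; note actual peak usage can still exceed this.
        writer.setRAMBufferSizeMB(8.0);

        // ... addDocument() calls ...
        writer.close();
    }
}
```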

~3 MB used by PostingList is reasonable.

If, after fixing the problem in your code, you are still running out of
RAM with a larger heap size, then please post the full histogram from
the resulting heap dump, at which point the offender should be obvious.

Or, can you make the problem happen with a smallish test case?
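Something along these lines might serve as a smallish standalone repro; this is only a sketch against the 2.4 API (the class name, field names, and document count are made up), run with a deliberately small heap such as -Xmx80m:

```java
import org.apache.lucene.analysis.standard.StandardAnalyzer;
import org.apache.lucene.document.Document;
import org.apache.lucene.document.Field;
import org.apache.lucene.index.IndexWriter;
import org.apache.lucene.store.FSDirectory;

public class OomRepro {
    public static void main(String[] args) throws Exception {
        // One long-lived writer, re-used for the whole run.
        IndexWriter writer = new IndexWriter(
                FSDirectory.getDirectory("/tmp/oom-repro"),
                new StandardAnalyzer(),
                true,  // create a new index
                IndexWriter.MaxFieldLength.UNLIMITED);
        for (int i = 0; i < 1000000; i++) {
            Document doc = new Document();
            doc.add(new Field("id", Integer.toString(i),
                    Field.Store.YES, Field.Index.NOT_ANALYZED));
            doc.add(new Field("body", "body text for record " + i,
                    Field.Store.NO, Field.Index.ANALYZED));
            writer.addDocument(doc);
        }
        writer.close();
    }
}
```

If the heap profile of a run like this stays flat, the leak is more likely in the surrounding application code than in IndexWriter itself.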


On Wed, Jun 24, 2009 at 5:37 AM, stefan<> wrote:
> Hi,
> I do not set a RAM buffer size; I assume the default is 16 MB.
> My server runs with an 80 MB heap; before starting Lucene, about 50 MB is used. In a production environment I ran into this problem with the heap size set to 750 MB and no other activity on the server (nighttime), though since then I have also diagnosed some problems in my own code. I just reproduced it with 80 MB, but I guess I could reproduce it with a 100 MB heap as well; it just takes longer.
> Here is the stack; I'll keep the dump for now:
> java.lang.OutOfMemoryError: Java heap space
> Dumping heap to c:\ ...
> Heap dump file created [97173841 bytes in 3.534 secs]
> ERROR lucene.SearchManager       - Failure in index daemon:
> java.lang.OutOfMemoryError: Java heap space
>        at java.util.HashSet.<init>(
>        at org.apache.lucene.index.DocumentsWriter.initFlushState(
>        at org.apache.lucene.index.DocumentsWriter.closeDocStore(
>        at org.apache.lucene.index.IndexWriter.flushDocStores(
>        at org.apache.lucene.index.IndexWriter.doFlush(
>        at org.apache.lucene.index.IndexWriter.flush(
>        at org.apache.lucene.index.IndexWriter.closeInternal(
>        at org.apache.lucene.index.IndexWriter.close(
>        at org.apache.lucene.index.IndexWriter.close(
> Heap Histogram shows:
> class org.apache.lucene.index.FreqProxTermsWriter$PostingList: 116736 instances, 3268608 bytes total
> Well, is there something I should do differently?
> Stefan
> -----Original Message-----
> From: Michael McCandless []
> Sent: Wed 24.06.2009 10:48
> To:
> Subject: Re: OutOfMemoryError using IndexWriter
> How large is the RAM buffer that you're giving IndexWriter?  How large
> a heap do you give the JVM?
> Can you post one of the OOM exceptions you're hitting?
> Mike
> On Wed, Jun 24, 2009 at 4:08 AM, stefan<> wrote:
>> Hi,
>> I am using Lucene 2.4.1 to index a database with fewer than a million records. The resulting index is about 50 MB in size.
>> I keep getting an OutOfMemoryError if I re-use the same IndexWriter to index the complete database, even though this is recommended in the performance hints.
>> What I do now is close the index every 10000 objects (and optimize it every 50 close actions) and create a new IndexWriter to continue. This works fine, but it hardly seems the recommended way to go.
>> I've been using jhat/jmap as well as the NetBeans profiler, and am fairly sure this is a problem related to Lucene.
>> Any ideas, or should I post this to Jira? Jira has quite a few OutOfMemory postings, but they all seem to be closed in version 2.4.1.
>> Thanks,
>> Stefan
>> ---------------------------------------------------------------------
>> To unsubscribe, e-mail:
>> For additional commands, e-mail:

