lucene-java-user mailing list archives

From "Rob Staveley (Tom)" <rstave...@seseit.com>
Subject RE: Avoiding java.lang.OutOfMemoryError in an unstored field
Date Tue, 06 Jun 2006 09:39:13 GMT
I answered too quickly too :-)

The QA folk reckon that a 132MB plain-text file with no whitespace is where it
falls over. There are some accountancy e-mails with attachments of ~170MB like
this, which we need to be able to field.

How would I go about flushing the IndexWriter? 
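For reference, a minimal sketch of the sort of tuning that seems relevant here,
assuming the Lucene 1.9/2.0-era IndexWriter API (the index path, analyzer and
limits below are placeholders, not recommendations):

import org.apache.lucene.analysis.standard.StandardAnalyzer;
import org.apache.lucene.index.IndexWriter;

public class FlushTuningSketch {
    public static void main(String[] args) throws Exception {
        // Placeholder index location and analyzer.
        IndexWriter writer = new IndexWriter("/tmp/index", new StandardAnalyzer(), true);

        // Flush buffered documents to disk more often than the default
        // (10 buffered documents in the 1.9/2.0 API).
        writer.setMaxBufferedDocs(2);

        // Cap the number of tokens indexed per field (default 10,000).
        // A Reader-backed field is inverted in memory before its segment is
        // written, so for 100MB+ plain-text files this limit is probably a
        // more relevant knob than the document buffer.
        writer.setMaxFieldLength(50000);

        // ... addDocument() calls for the large documents go here ...

        writer.close();
    }
}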

-----Original Message-----
From: karl wettin [mailto:kalle@snigel.net] 
Sent: 06 June 2006 10:16
To: java-user@lucene.apache.org
Subject: Re: Avoiding java.lang.OutOfMemoryError in an unstored field

On Tue, 2006-06-06 at 10:11 +0100, Rob Staveley (Tom) wrote:
> Sometimes I need to index large documents. I've got just about as much 
> heap as my application is allowed (-Xmx512m) and I'm using the 
> unstored org.apache.lucene.document.Field constructed with a 
> java.io.Reader, but I'm still suffering from 
> java.lang.OutOfMemoryError when I index some large documents. Are 
> org.apache.lucene.document.Field and 
> org.apache.lucene.document.Document always loaded entirely in memory?

Sorry, I answered too quickly.

Is this when you create the index? You might want to flush the IndexWriter more
often than normal. How large are these really large documents?
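
For reference, an unstored Reader-backed field of the kind described in the
quoted question is typically set up along these lines (the "body" and "path"
field names and the file path are hypothetical):

import java.io.BufferedReader;
import java.io.FileReader;
import java.io.Reader;
import org.apache.lucene.document.Document;
import org.apache.lucene.document.Field;
import org.apache.lucene.index.IndexWriter;

public class ReaderFieldSketch {
    // Indexes one large plain-text file as an unstored, tokenized field.
    static void indexFile(IndexWriter writer, String path) throws Exception {
        Reader reader = new BufferedReader(new FileReader(path));
        Document doc = new Document();
        // Field(String, Reader) is indexed and tokenized but not stored; the
        // text is streamed through the analyzer from the Reader rather than
        // being held as a single String.
        doc.add(new Field("body", reader));
        // A stored, untokenized field identifying the source file.
        doc.add(new Field("path", path, Field.Store.YES, Field.Index.UN_TOKENIZED));
        writer.addDocument(doc);
        reader.close();
    }
}

Streaming the text through a Reader avoids holding the raw characters as one
String, but the inverted postings for the document still accumulate in memory
until the writer flushes, which is presumably where the 512MB heap runs out.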


---------------------------------------------------------------------
To unsubscribe, e-mail: java-user-unsubscribe@lucene.apache.org
For additional commands, e-mail: java-user-help@lucene.apache.org
