lucene-java-user mailing list archives

From <>
Subject RE: Indexing large files? - No answers yet...
Date Fri, 11 Sep 2009 14:23:52 GMT
Thanks Dan!

I upgraded my JVM from .12 to .16.  I'll test with that.

I've been testing by setting many IndexWriter parameters manually to see
where the best performance is.  The net result was just delaying the OOM.

The scenario is a test with an empty index.  I have a 5 MB file with
800,000 unique terms in it. I make one document with the file and then
add that document to the IndexWriter for indexing.  At a heap space of
64 MB the OOM occurs almost immediately when the
IndexWriter.add(document) is called. If I increase the heap space to 128
MB, indexing succeeds, taking less than 5 seconds to complete. So...

I commit only once.
The OOM occurs before I can close the IndexWriter at 64 MB heap space; I
close and optimize on the successful 128 MB test.
I'm only indexing one document.
I use Luke all the time for other indexes, but after the OOM on this
test not even "Force Open" will get me into the index.
There are no searches going on.  This is a test trying to index one 5
MB file into an empty index.

So you're probably asking why I don't just increase the heap space and
be happy.  The answer is that the larger the file, the more heap space
is needed.  The system I'm developing doesn't have the heap space
required for the potentially large files that end users might try to
index.  So I would like a way to index a file as one document using a
small memory footprint.  It would be nice to be able to "throttle" the
indexing of large files to control memory usage.
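One way to bound memory, if it is acceptable to split a large file
across several Lucene documents (which does change search semantics), is
to read the file in fixed-size character chunks and index each chunk as
its own small document, letting the writer flush between chunks.  A
sketch, where the chunk size, field name, and class name are all
illustrative and not from the original post:

```java
import java.io.*;
import java.util.ArrayList;
import java.util.List;

// Sketch: split a large file into chunks of at most chunkChars characters
// so each chunk can be indexed as its own Document.
public class ChunkedFileReader {

    public static List<String> readChunks(File file, int chunkChars)
            throws IOException {
        List<String> chunks = new ArrayList<String>();
        Reader reader = new BufferedReader(new FileReader(file));
        try {
            char[] buf = new char[chunkChars];
            int n;
            while ((n = reader.read(buf, 0, buf.length)) != -1) {
                chunks.add(new String(buf, 0, n));
                // Indexing side (Lucene 2.4, sketched as comments):
                //   Document doc = new Document();
                //   doc.add(new Field("content", chunk, Field.Store.NO,
                //                     Field.Index.ANALYZED));
                //   writer.addDocument(doc);  // buffer can flush here
            }
        } finally {
            reader.close();
        }
        return chunks;
    }
}
```

Because the RAM buffer is checked between addDocument calls, small
documents give the writer a chance to flush, which a single giant
document never does.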


-----Original Message-----
From: [] On Behalf Of Dan OConnor
Sent: Friday, September 11, 2009 8:13 AM
Subject: RE: Indexing large files? - No answers yet...


My first suggestion would be to update your JVM to the latest version
(or at least .14). There were several garbage-collection-related issues
resolved in versions 10 through 13 (especially dealing with large heaps).

Next, knowing your IndexWriter parameter settings would help in figuring
out why you are using so much RAM.

How often are you calling commit?
Do you close your IndexWriter after every document?
How many documents of this size are you indexing?
Have you used Luke to look at your index?
If this is a large index, have you optimized it recently?
Are there any searches going on while you are indexing?
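For reference, a configuration sketch of writer settings that bound RAM
use while answering the questions above (Lucene 2.4 API; the variable
names and values are illustrative, and 'directory', 'analyzer', and
'doc' are assumed to exist elsewhere):

```java
// Configuration sketch only (Lucene 2.4 API); values are illustrative.
IndexWriter writer = new IndexWriter(directory, analyzer,
        IndexWriter.MaxFieldLength.UNLIMITED);
writer.setRAMBufferSizeMB(16.0);  // flush buffered postings at ~16 MB
writer.setMaxBufferedDocs(IndexWriter.DISABLE_AUTO_FLUSH); // flush by RAM only
// Caveat: the RAM buffer is only checked between documents, so a single
// very large document must still fit its postings in the heap.
writer.addDocument(doc);
writer.commit();    // commit once, not per document
writer.optimize();  // optional; cheap on a small index
writer.close();
```

The caveat in the comments is the important part for this thread: RAM
buffer settings throttle indexing across documents, not within one.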


-----Original Message-----
From: [] 
Sent: Friday, September 11, 2009 7:57 AM
Subject: RE: Indexing large files? - No answers yet...

This issue is still open.  Any suggestions/help with this would be
greatly appreciated.



-----Original Message-----
From: [] On Behalf Of
Sent: Monday, August 31, 2009 10:28 AM
Subject: Indexing large files?



I'm working with Lucene 2.4.0 and the JVM (JDK 1.6.0_07).  I'm
consistently receiving "OutOfMemoryError: Java heap space", when trying
to index large text files.


Example 1: Indexing a 5 MB text file runs out of memory with a 16 MB
max. heap size.  So I increased the max. heap size to 512 MB.  This
worked for the 5 MB text file, but Lucene still used 84 MB of heap space
to do this.  Why so much?
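For a rough sense of scale, assuming the 5 MB file here is the same
800,000-unique-term file described in the follow-up message above:
dividing the observed heap by the term count gives a per-unique-term
overhead estimate, which is plausible given the per-term posting and
hash-entry structures the indexing buffers maintain.

```java
// Back-of-envelope estimate; assumes the 5 MB file is the one with
// 800,000 unique terms mentioned elsewhere in this thread.
public class TermOverheadEstimate {
    public static void main(String[] args) {
        long observedHeap = 84L * 1024 * 1024; // 84 MB reported above, in bytes
        long uniqueTerms = 800000L;
        long bytesPerTerm = observedHeap / uniqueTerms;
        // Each unique term carries its char data plus per-term posting
        // list and hash-entry overhead in the in-memory buffers.
        System.out.println(bytesPerTerm + " bytes/term"); // prints "110 bytes/term"
    }
}
```

On the order of a hundred bytes per unique term would explain why a file
with very high term cardinality uses far more heap than its raw size.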


The class FreqProxTermsWriterPerField appears to be the biggest memory
consumer by far according to JConsole and the TPTP Memory Profiling
plugin for Eclipse Ganymede.


Example 2: Indexing a 62 MB text file runs out of memory with a 512 MB
max. heap size.  Increasing the max. heap size to 1024 MB works, but
Lucene uses 826 MB of heap space while doing it.  That still seems like
far too much memory.  I'm sure larger files would hit the error as
well, since memory use appears to scale with file size.


I'm on a Windows XP SP2 platform with 2 GB of RAM.  So what is the best
practice for indexing large files?  Here is the code snippet I'm using:

// Index the content of a text file.  (The snippet arrived truncated in
// the archive; the extension check, FileReader, writer call, and
// exception messages below are reconstructed guesses, marked inline.)

      private Boolean saveTXTFile(File textFile, Document textDocument)
            throws CIDBException {

            try {
                  Boolean isFile = textFile.isFile();
                  // Reconstructed: the original expression was cut off.
                  Boolean hasTextExtension =
                        textFile.getName().toLowerCase().endsWith(".txt");

                  if (isFile && hasTextExtension) {

                        System.out.println("File " +
                              textFile.getCanonicalPath() +
                              " is being indexed");

                        // Reconstructed: the original line was cut off.
                        Reader textFileReader = new FileReader(textFile);

                        if (textDocument == null)
                              textDocument = new Document();

                        // Field(String, Reader): tokenized, not stored.
                        textDocument.add(new Field("content",
                              textFileReader));

                        // Assumed: 'indexWriter' is a field of the
                        // surrounding class; the original call was cut off.
                        indexWriter.addDocument(textDocument);
                  }

            } catch (FileNotFoundException fnfe) {
                  return false;
            } catch (CorruptIndexException cie) {
                  throw new CIDBException("The index has become corrupt");
            } catch (IOException ioe) {
                  return false;
            }

            return true;
      }




Thanks much,




To unsubscribe, e-mail:
For additional commands, e-mail:

