lucene-java-user mailing list archives

From Robert Schultz <rob...@cosmicrealms.com>
Subject Any problems with a failed IndexWriter optimize call?
Date Mon, 01 Aug 2005 03:20:36 GMT
Hello! I am using Lucene 1.4.3.

I'm building a Lucene index that will have about 25 million documents 
when it is done.
I'm adding 250,000 documents at a time.

Currently there are about 1.2 million documents in there, and I ran into a problem.
After I had added a batch of 250,000, I got a 'java.lang.OutOfMemoryError' 
thrown by writer.optimize(); (a standard IndexWriter)

The exception caused my program to exit, so it never called 
'writer.close();'
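
For reference, the batch step looks roughly like this (simplified; the 
index path and loadNextDocumentText() are just stand-ins for my real setup):

    import org.apache.lucene.analysis.standard.StandardAnalyzer;
    import org.apache.lucene.document.Document;
    import org.apache.lucene.document.Field;
    import org.apache.lucene.index.IndexWriter;

    public class BatchIndexer {
        public static void main(String[] args) throws Exception {
            // false = append to the existing index rather than creating a new one
            IndexWriter writer = new IndexWriter("/path/to/index", new StandardAnalyzer(), false);
            try {
                for (int i = 0; i < 250000; i++) {
                    Document doc = new Document();
                    // Field.Text = stored, indexed, tokenized (Lucene 1.4 API)
                    doc.add(Field.Text("contents", loadNextDocumentText(i)));
                    writer.addDocument(doc);
                }
                writer.optimize();
            } finally {
                // at least attempt close() even if optimize() throws
                writer.close();
            }
        }

        // stand-in for my real data source
        private static String loadNextDocumentText(int i) {
            return "document body " + i;
        }
    }

(With the try/finally, close() would at least have been attempted when the 
OutOfMemoryError hit, which is what didn't happen in my run.)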

First, with it dying in the middle of an optimize(), is there any chance 
my index is corrupted?

Second, I know I can delete the /tmp/lucene*.lock file to release the 
lock so I can add more documents, but is it safe to do that?
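
(As an aside, I believe the lock can also be cleared from code with 
IndexReader.unlock(), which I assume amounts to the same thing as deleting 
the lock file by hand; something like:

    import org.apache.lucene.index.IndexReader;
    import org.apache.lucene.store.FSDirectory;

    public class ClearStaleLock {
        public static void main(String[] args) throws Exception {
            // Forcibly release a stale write lock left behind by a crashed process.
            // Presumably only safe when no other process is writing to the index.
            FSDirectory dir = FSDirectory.getDirectory("/path/to/index", false);
            IndexReader.unlock(dir);
            dir.close();
        }
    }

Either way, the question is whether releasing the lock after a crash is safe.)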

I've since figured out that I can pass -Xmx to the 'java' command to 
increase the maximum heap size.
It was using the default of 64 MB; I plan to increase that to 175 MB to 
start with.
That should solve the memory problem (I can allocate more down the line 
if necessary).
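
In other words, I'd launch it with something along these lines (MyIndexer 
is just a placeholder for my indexing class):

    java -Xmx175m MyIndexer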

Lastly, when I go back, open the index again, add another 250,000 
documents, and call optimize() again, will the previously failed 
optimize hurt the index at all?



---------------------------------------------------------------------
To unsubscribe, e-mail: java-user-unsubscribe@lucene.apache.org
For additional commands, e-mail: java-user-help@lucene.apache.org

