lucene-java-user mailing list archives

From Lokesh Bajaj <>
Subject Re: Problem with deleting and optimizing index
Date Sun, 24 Jul 2005 17:23:01 GMT
Actually, you should probably not let your index grow beyond one-third the size of your disk:
a] You start off with your original index.
b] During optimize, Lucene first writes out the merged files in the non-compound file format.
c] Lucene then combines those non-compound files into the compound file format.
So before the final index is produced, disk usage can reach three times the size of the final
index. You can avoid this by not using the compound file format, but various people have
reported "too many open files" errors when doing that.
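To make the rule of thumb concrete, here is a small sketch (not a Lucene API call; the threefold factor is just the sum of the three steps above, and the index/disk sizes are made-up numbers) that estimates whether an optimize is safe to run:

```java
// Sketch: estimate peak disk usage during optimize() with the compound
// file format enabled. The 3x factor is an assumption derived from the
// steps above: a] original index, b] merged non-compound segments, and
// c] the final compound copy can all exist on disk at once.
public class OptimizePeakUsage {
    static long peakBytesDuringOptimize(long indexBytes) {
        return 3 * indexBytes;
    }

    public static void main(String[] args) {
        long indexBytes = 10L * 1024 * 1024 * 1024; // hypothetical 10 GB index
        long diskBytes  = 40L * 1024 * 1024 * 1024; // hypothetical 40 GB disk
        long peak = peakBytesDuringOptimize(indexBytes);
        System.out.println("Peak usage during optimize: " + peak + " bytes");
        System.out.println("Safe to optimize: " + (peak <= diskBytes));
    }
}
```

With these numbers the 10 GB index peaks at 30 GB during the merge, which still fits on the 40 GB disk; a 15 GB index on the same disk would not.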

Peter Kim <> wrote:
Hi all,

I have a problem related to index size, deleting, and optimizing. From
reading various sources online, it seems the Lucene index should grow
no larger than half the size of the disk, since during optimization the
index can balloon to double its unoptimized size. Unfortunately, I've
allowed my index to grow past half the size of the disk (actually, I
only have 8% disk capacity remaining).

So, I can delete a chunk of documents using IndexReader.delete(), but
disk space won't actually be freed until I run optimize(). And running
optimize() may cause the index to grow until the disk runs out of
space.
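For reference, the delete-then-optimize workflow described above looks roughly like this against the Lucene 1.x API of the time (the index path and the "id" field/value are hypothetical; this is a sketch, not a tested recipe):

```java
import org.apache.lucene.analysis.standard.StandardAnalyzer;
import org.apache.lucene.index.IndexReader;
import org.apache.lucene.index.IndexWriter;
import org.apache.lucene.index.Term;

public class DeleteThenOptimize {
    public static void main(String[] args) throws Exception {
        // Mark documents as deleted; disk space is NOT reclaimed yet.
        IndexReader reader = IndexReader.open("/path/to/index"); // hypothetical path
        reader.delete(new Term("id", "doc-42"));                 // hypothetical field/value
        reader.close(); // commits the deletions

        // Merging segments is what actually reclaims the space --
        // but the merge itself needs extra disk while it runs.
        IndexWriter writer = new IndexWriter("/path/to/index",
                new StandardAnalyzer(), false); // false = open existing index
        writer.optimize();
        writer.close();
    }
}
```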

What should I do? Should I just skip optimizing and trust that newly
added documents will reuse the space taken up by the deleted ones? Or
is my only option to nuke the index and remember next time not to let
it grow to more than half the size of the disk?

