lucene-java-user mailing list archives

From Michael McCandless <>
Subject Re: recurrent IO/CPU peaks
Date Wed, 02 Mar 2011 11:01:00 GMT
On Tue, Mar 1, 2011 at 11:40 AM,  <> wrote:

> we developed a real time logging system. we index 4.5 million
> events/day, spread over multiple servers, each with its own index. every
> night we delete events from the index based on a retention policy, then
> we optimize. each server takes between 1 and 2 hours to optimize. ideally,
> we would like to optimize more quickly, without compromising search
> performance. in the lucene in action book, it says "use optimize
> sparingly; use the optimize(maxNumSegments) method instead". what is a
> reasonable maxNumSegments in my situation?

Maybe try starting with maxNumSegments=10 and iterate from there?

But, are you sure you even need to optimize at all?  Are you hitting
search performance issues if you don't optimize?
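For reference, a partial optimize is the same call with a segment budget. This is a sketch against the Lucene 3.0.x-era API; the index path is a placeholder:

```java
import java.io.File;

import org.apache.lucene.analysis.standard.StandardAnalyzer;
import org.apache.lucene.index.IndexWriter;
import org.apache.lucene.store.FSDirectory;
import org.apache.lucene.util.Version;

public class PartialOptimize {
    public static void main(String[] args) throws Exception {
        // Placeholder index location -- substitute your own.
        FSDirectory dir = FSDirectory.open(new File("/path/to/index"));
        IndexWriter writer = new IndexWriter(dir,
            new StandardAnalyzer(Version.LUCENE_30),
            IndexWriter.MaxFieldLength.UNLIMITED);

        // Merge down to at most 10 segments instead of 1; typically much
        // cheaper than a full optimize, since the largest segments can be
        // left untouched.
        writer.optimize(10);

        writer.close();
    }
}
```

If 10 segments still hurts search latency, ratchet the number down (5, 2, ...) until you find the cost/performance balance for your hardware.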

Another thing to try is calling .setCalibrateSizeByDeletes(true) on
your LogByteSizeMergePolicy (the default).  This generally causes it
to favor merging away segments with many deletions...
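In code that might look like the sketch below, assuming a 3.0.x-era version where the LogByteSizeMergePolicy constructor takes the writer (the path is again a placeholder; on later versions you would set the policy via IndexWriterConfig instead):

```java
import java.io.File;

import org.apache.lucene.analysis.standard.StandardAnalyzer;
import org.apache.lucene.index.IndexWriter;
import org.apache.lucene.index.LogByteSizeMergePolicy;
import org.apache.lucene.store.FSDirectory;
import org.apache.lucene.util.Version;

public class CalibrateByDeletes {
    public static void main(String[] args) throws Exception {
        FSDirectory dir = FSDirectory.open(new File("/path/to/index")); // placeholder
        IndexWriter writer = new IndexWriter(dir,
            new StandardAnalyzer(Version.LUCENE_30),
            IndexWriter.MaxFieldLength.UNLIMITED);

        LogByteSizeMergePolicy mp = new LogByteSizeMergePolicy(writer);
        // Subtract deleted docs when measuring segment size, so segments
        // full of deletions look small and get merged (and their deletes
        // reclaimed) sooner.
        mp.setCalibrateSizeByDeletes(true);
        writer.setMergePolicy(mp);

        writer.close();
    }
}
```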

