lucene-dev mailing list archives

From "Yonik Seeley" <yo...@apache.org>
Subject Re: Concurrent merge
Date Wed, 21 Feb 2007 18:12:09 GMT
On 2/21/07, Ning Li <ning.li.li@gmail.com> wrote:
> I agree that the current blocking model works for some applications,
> especially if the indexes are batch built.
>
> But other applications, e.g. with online indexes, would greatly
> benefit from a non-blocking model. Most systems that merge data
> support background merges. As long as we keep it simple (how about the
> original proposal?), applications will benefit from this.

Yes, if we do anything, I think simple is better.  I wouldn't go down
the whole soft-limit/hard-limit road of gradually slowing down
additions... the complexity doesn't sound worth it.

The simplest model could just take a reference to the ram segments and
the deleted terms on a flush; another thread could then merge them
into a single segment, apply the term deletions, and do any other
necessary merging, while the original thread created new empty
ram segments and deleted terms so additions could continue.  If the
user added more than maxBufferedDocs before the background merge was
complete, the add would block (as it does currently).

-Yonik


