lucene-dev mailing list archives

From "Michael McCandless (JIRA)" <j...@apache.org>
Subject [jira] Commented: (LUCENE-2324) Per thread DocumentsWriters that write their own private segments
Date Thu, 15 Apr 2010 16:08:53 GMT

    [ https://issues.apache.org/jira/browse/LUCENE-2324?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=12857373#action_12857373 ]

Michael McCandless commented on LUCENE-2324:
--------------------------------------------

bq. The usual design is a queued ingestion pipeline, where a pool of indexer threads take
docs out of a queue and feed them to an IndexWriter, I think?

bq. Mainly, because I think apps with such an affinity that you describe are very rare?

Hmm I suspect it's not that rare....  yes, one design is a single
indexing queue w/ a dedicated thread pool only for indexing, but a push
model is equally valid, where your app already has separate threads (or
thread pools) servicing different content sources, so when a doc
arrives at one of those source-specific threads, it's that thread that
indexes it, rather than handing off to a separate pool.

Lucene is used in a very wide variety of apps -- we shouldn't optimize
the indexer around such hard app-specific assumptions.

bq. And if a user really has such different docs, maybe the right answer would be to have
more than one single index?

Hmm but the app shouldn't have to resort to this... (it doesn't have
to today).

But... could we allow an add/updateDocument call to express this
affinity explicitly?  If you index homogeneous docs you wouldn't use
it, but, if you index drastically different docs that fall into clear
"categories", expressing the affinity can get you a good gain in
indexing throughput.

This may be the best solution, since then one could pass the affinity
even through a thread pool, and then we would fall back to thread
binding if the document class wasn't declared?

I mean this is virtually identical to "having more than one index",
since the DW is like its own index.  It just saves some of the
copy-back/merge cost of addIndexes...
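
EG, roughly like this (just a strawman; addDocument does not take an
affinity key today, and the writers array / bindToCurrentThread helper
are made up):

{code:java}
// Strawman only.  The app labels each doc with an opaque key; IW
// hashes the key to one of its N private DocumentsWriters, so docs in
// the same "category" share one in-RAM term space even when they
// arrive via a shared thread pool.
public void addDocument(Document doc, Object affinityKey) throws IOException {
  final DocumentsWriter dw;
  if (affinityKey != null) {
    // deterministic category -> DW binding (assumed 'writers' field)
    dw = writers[(affinityKey.hashCode() & 0x7FFFFFFF) % writers.length];
  } else {
    // no affinity declared: fall back to plain thread binding
    dw = bindToCurrentThread();
  }
  dw.addDocument(doc);
}
{code}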

bq. Even if today an app utilizes the thread affinity, this only results in maybe somewhat
faster indexing performance, but the benefits would be lost after flushing/merging.

Yes this optimization is only about the initial flush, but, it's
potentially sizable.  Merging matters less since typically it's not
the bottleneck (happens in the BG, quickly enough).

On the right apps, thread affinity can make a huge difference.  EG if
you allow up to 8 thread states, and the threads are indexing content
w/ highly divergent terms (eg, one language per thread, or, docs w/
very different field names), then in the worst case (random binding)
you'll be up to 1/8 as RAM-efficient, since each unique term must now
be copied into up to 8 thread states instead of one.  We have a high
per-term RAM cost (reduced thanks to the parallel arrays, but, still
high).

bq. If we assign docs randomly to available DocumentsWriterPerThreads, then we should on average
make good use of the overall memory?

It really depends on the app -- if the term space is highly thread
dependent (above examples) you can end up flushing much more frequently
for a given RAM buffer.

bq. Alternatively we could also select the DWPT from the pool of available DWPTs that has
the highest amount of free memory?

Hmm... this would make the binding kinda costly?  You'd need a pqueue?
Thread affinity (or the explicit affinity) is a single
map/array/member lookup.  But it's an interesting idea...
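
Ie, the two policies look something like this (simplified sketch, not
code from any patch; allocateDWPT and bytesUsed are assumed):

{code:java}
// Thread affinity: O(1), a single map lookup per doc.
final ConcurrentHashMap<Thread,DocumentsWriterPerThread> byThread =
    new ConcurrentHashMap<Thread,DocumentsWriterPerThread>();

DocumentsWriterPerThread bindToCurrentThread() {
  DocumentsWriterPerThread dwpt = byThread.get(Thread.currentThread());
  if (dwpt == null) {
    dwpt = allocateDWPT();                       // hypothetical helper
    byThread.put(Thread.currentThread(), dwpt);  // key is the caller itself
  }
  return dwpt;
}

// Most-free-RAM: needs an ordered structure (the pqueue), which has to
// be re-ordered every time any DWPT's RAM usage changes, ie per doc.
final PriorityQueue<DocumentsWriterPerThread> byFreeRAM =
    new PriorityQueue<DocumentsWriterPerThread>(8,
        new Comparator<DocumentsWriterPerThread>() {
          public int compare(DocumentsWriterPerThread a,
                             DocumentsWriterPerThread b) {
            // least RAM used first (bytesUsed is an assumed accessor)
            return Long.signum(a.bytesUsed() - b.bytesUsed());
          }
        });
{code}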

bq. If you do have a global RAM management, how would the flushing work? E.g. when a global
flush is triggered because all RAM is consumed, and we pick the DWPT with the highest amount
of allocated memory for flushing, what will the other DWPTs do during that flush? Wouldn't
we have to pause the other DWPTs to make sure we don't exceed the maxRAMBufferSize?

The other DWs would keep indexing :)  That's the beauty of this
approach... a flush of one DW doesn't stop all other DWs from
indexing, unlike today.

And you want to serialize the flushing, right?  Ie, only one DW flushes
at a time (the others keep indexing).

Hmm I suppose flushing more than one should be allowed (OS/IO have
a lot of concurrency, esp since IO goes into write cache)... perhaps
that's the best way to balance index vs flush time?  EG we pick one to
flush @ 90%, if we cross 95% we pick another to flush, another at
100%, etc.
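
Roughly (illustrative thresholds, made-up helpers):

{code:java}
// Tiered flush sketch -- thresholds and helpers are made up.  Each
// time total RAM crosses another tier, kick off one more concurrent
// flush of the largest DWPT; indexing threads never stall.
private static final double[] TIERS = {0.90, 0.95, 1.00};

void maybeTriggerFlush() throws IOException {
  final long used = totalBytesUsed();              // assumed accessor
  int wanted = 0;
  for (double tier : TIERS) {
    if (used >= (long) (tier * ramBudgetBytes)) {  // assumed budget field
      wanted++;
    }
  }
  // start flushes until 'wanted' DWPTs are flushing concurrently
  while (numFlushing() < wanted) {                 // hypothetical helpers
    flushInBackground(largestIdleDWPT());
  }
}
{code}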

bq. Of course we could say "always flush when 90% of the overall memory is consumed", but
how would we know that the remaining 10% won't fill up during the time the flush takes?

Regardless of the approach for document -> DW binding, this is an
issue (ie it's non-differentiating here)?  Ie the other DWs continue
to consume RAM while one DW is flushing.  I think the low/high water
mark is an OK solution here?  Or the tiered flushing (I think I like
that better :) ).
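
The water mark variant would be something like (again just a sketch,
with made-up fields/helpers):

{code:java}
// Low/high water mark sketch -- numbers and helpers are made up.
// Flushing starts at the low mark; incoming threads stall only if
// RAM still reaches the high mark while the flush is running.
void maybeFlushWatermark() throws IOException {
  final long used = totalBytesUsed();              // assumed accessor
  if (used >= highWaterBytes) {
    stallUntilBelow(lowWaterBytes);                // hypothetical back-pressure
  } else if (used >= lowWaterBytes && !flushInProgress()) {
    flushInBackground(largestDWPT());
  }
}
{code}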

bq. Having a fully decoupled memory management is compelling I think, mainly because it makes
everything so much simpler. A DWPT could decide itself when it's time to flush, and the other
ones can keep going independently.

I'm all for simplifying things, which you've already nicely done here,
but not if it's at the cost of a non-trivial potential indexing perf
loss.  We're already taking a perf hit here, since the doc stores
can't be shared... I think that case is justifiable (good
simplification).


> Per thread DocumentsWriters that write their own private segments
> -----------------------------------------------------------------
>
>                 Key: LUCENE-2324
>                 URL: https://issues.apache.org/jira/browse/LUCENE-2324
>             Project: Lucene - Java
>          Issue Type: Improvement
>          Components: Index
>            Reporter: Michael Busch
>            Assignee: Michael Busch
>            Priority: Minor
>             Fix For: 3.1
>
>         Attachments: lucene-2324.patch, LUCENE-2324.patch
>
>
> See LUCENE-2293 for motivation and more details.
> I'm copying here Mike's summary he posted on 2293:
> Change the approach for how we buffer in RAM to a more isolated
> approach, whereby IW has N fully independent RAM segments
> in-process and when a doc needs to be indexed it's added to one of
> them. Each segment would also write its own doc stores and
> "normal" segment merging (not the inefficient merge we now do on
> flush) would merge them. This should be a good simplification in
> the chain (eg maybe we can remove the *PerThread classes). The
> segments can flush independently, letting us make much better
> concurrent use of IO & CPU.

-- 
This message is automatically generated by JIRA.
-
If you think it was sent incorrectly contact one of the administrators: https://issues.apache.org/jira/secure/Administrators.jspa
-
For more information on JIRA, see: http://www.atlassian.com/software/jira


---------------------------------------------------------------------
To unsubscribe, e-mail: java-dev-unsubscribe@lucene.apache.org
For additional commands, e-mail: java-dev-help@lucene.apache.org

