lucene-dev mailing list archives

From "Michael McCandless (JIRA)" <j...@apache.org>
Subject [jira] Commented: (LUCENE-1526) For near real-time search, use paged copy-on-write BitVector impl
Date Mon, 09 Nov 2009 13:18:32 GMT

    [ https://issues.apache.org/jira/browse/LUCENE-1526?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=12774958#action_12774958 ]

Michael McCandless commented on LUCENE-1526:
--------------------------------------------

{quote}
bq. But, I agree it's wasteful of space when deletes are so sparse... though it is fast.

It's fast for random access, but it's really slow if you need to make a lot of these (either
during heavy indexing if copy-on-write, or during heavy query load if copy-on-reopen).
{quote}

But how many msec does this clone add in practice?

Note that it's only done if there is a new deletion against that
segment.

I do agree it's wasteful, but searching should then be faster
than using the AcceleratedIntSet or a MultiBitSet.  It's a tradeoff of
turnaround time against search performance.
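
To put that "wasteful but fast" tradeoff in code, here's a minimal
sketch of the copy-on-write pattern in question (my own illustration,
not Lucene's actual SegmentReader/BitVector implementation):

{code:java}
import java.util.Arrays;

class CowDeletedDocs {
  private long[] bits;     // one bit per docID
  private boolean shared;  // true while a clone still references 'bits'

  CowDeletedDocs(int maxDoc) {
    bits = new long[(maxDoc + 63) >>> 6];
  }

  private CowDeletedDocs(long[] bits) {
    this.bits = bits;
    this.shared = true;
  }

  // Cheap: the reopened reader shares the underlying array.
  CowDeletedDocs reopenClone() {
    shared = true;
    return new CowDeletedDocs(bits);
  }

  // The first delete after a clone pays the full O(maxDoc/8) copy.
  void delete(int docID) {
    if (shared) {
      bits = Arrays.copyOf(bits, bits.length);
      shared = false;
    }
    bits[docID >>> 6] |= 1L << (docID & 63);
  }

  // Random access stays a single array lookup -- the "fast" part.
  boolean isDeleted(int docID) {
    return (bits[docID >>> 6] & (1L << (docID & 63))) != 0;
  }
}
{code}

The clone itself is nearly free; the cost lands on the first delete
after each reopen.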

I'd love to see how the worst-case queries (matching millions of hits)
perform with each of these three options.

However, I suspect it's not the clone time that's so costly... I bet
it's the fact that Lucene has to resolve the deletions to docIDs, in
the foreground of reopen, that dominates.  And probably also that
Lucene doesn't yet use RAMDir (but LUCENE-1313 is working towards
fixing that).

Ie, Zoie is "late binding" (filters out UIDs as it encounters them
during searching), while Lucene is "early binding" (immediately
resolves UIDs -> docIDs during reopen).  And because Zoie does the
"resolve deletions to docIDs" in the BG, it's not on any query's
execution path.
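
To make the distinction concrete, a late-binding check looks roughly
like this (a sketch with hypothetical names -- LateBindingCollector and
uidOf are my own, not Zoie's actual API); note the UID lookup and set
membership test run on every query's execution path:

{code:java}
import java.util.Set;

abstract class LateBindingCollector {
  private final Set<Long> deletedUids;  // in-memory set of deleted UIDs

  LateBindingCollector(Set<Long> deletedUids) {
    this.deletedUids = deletedUids;
  }

  // Called once per hit: the deletion is filtered out at search time,
  // so no UID -> docID resolution ever happens during reopen.
  final void collect(int docID) {
    if (deletedUids.contains(uidOf(docID))) {
      return;  // late binding: drop the stale doc here
    }
    doCollect(docID);
  }

  abstract long uidOf(int docID);    // e.g. from a cached UID column
  abstract void doCollect(int docID);
}
{code}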

How does Zoie resolve UID -> docID, now?  I remember a thread about
this a while back...

Actually, one simple fix we could make to Lucene is to resolve
deletions in the foreground, when deleteDocuments is called.
This'd mean it's the thread that calls updateDocument that pays the
price, rather than a future reopen.  Net/net it's a zero-sum game
(it just shifts the cost from the reopen to the indexing), but it'd
mean the reopen time is minimized, which is clearly the direction we
want to go in.  I'll open a new issue.
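
Sketch of what I mean, against the Lucene 2.x TermDocs API
(ForegroundDeletesWriter and markDeleted are hypothetical names, just
for illustration):

{code:java}
import java.io.IOException;
import org.apache.lucene.document.Document;
import org.apache.lucene.index.IndexReader;
import org.apache.lucene.index.Term;
import org.apache.lucene.index.TermDocs;

// Sketch only: a writer that resolves deletes on the indexing thread.
abstract class ForegroundDeletesWriter {
  protected IndexReader currentReader;  // reader over current segments

  void updateDocument(Term uidTerm, Document doc) throws IOException {
    // Pay the term-seek cost here, on the indexing thread...
    TermDocs td = currentReader.termDocs(uidTerm);
    try {
      while (td.next()) {
        markDeleted(td.doc());  // hypothetical hook: record docID now
      }
    } finally {
      td.close();
    }
    addDocument(doc);
    // ...so reopen() only swaps in the already-resolved bits.
  }

  abstract void markDeleted(int docID);
  abstract void addDocument(Document doc) throws IOException;
}
{code}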

{quote}
bq. So are you using this, only, as your deleted docs? Ie you don't store the deletions with
Lucene? I'm getting confused if this is only for the NRT case, or, in general.

These are only to augment the deleted docs of the disk reader - the disk reader isn't reopened
at all except infrequently.  Once a batch is ready to be flushed to disk (a big enough
RAMDirectory is filled, or enough time goes by, depending on configuration),
diskReader.addIndexes is called, and when the diskReader is reopened, the deletes live in the
normal diskReader's delete set.  Until then, while there is a batch in RAM that hasn't been
flushed, the IntSetAccelerator is applied to the not-yet-reopened diskReader.
{quote}

I think I now understand it (plus John's comment that these are Zoie's
UIDs, not Lucene's docIDs, helped)...

When a doc needs to be updated, you index it immediately into the
RAMDir and reopen the RAMDir's IndexReader.  You add its UID to the
AcceleratedIntSet, and all searches are "AND NOT"ed against that set.
You don't tell Lucene to delete the old doc yet.

Periodically, in the BG, you use addIndexes to push the RAMDir to
disk, and, on a perhaps separate schedule, you resolve the deleted
UIDs to docIDs and flush them to disk.
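
In code form, the cycle I'm describing would be roughly this (a
sketch; the class shape and all method names are my own guesses, not
Zoie's actual API):

{code:java}
import java.io.IOException;
import org.apache.lucene.document.Document;

// Sketch of the update/flush cycle as I understand it.
abstract class ZoieLikeSystem {
  // On each update: no delete is issued against the disk reader yet.
  void update(long uid, Document doc) throws IOException {
    addToRamDir(doc);            // index the new version into the RAMDir
    reopenRamReader();           // make it searchable right away
    addToAcceleratedIntSet(uid); // searches AND NOT against this set
  }

  // Periodically, in the BG thread:
  void flush() throws IOException {
    pushRamIndexToDisk();           // diskWriter.addIndexes(ramDir)
    resolveUidsToDocIdsAndDelete(); // off any query's execution path
    clearAcceleratedIntSet();
    reopenDiskReader();             // deletes now live in the disk reader
  }

  abstract void addToRamDir(Document doc) throws IOException;
  abstract void reopenRamReader() throws IOException;
  abstract void addToAcceleratedIntSet(long uid);
  abstract void pushRamIndexToDisk() throws IOException;
  abstract void resolveUidsToDocIdsAndDelete() throws IOException;
  abstract void clearAcceleratedIntSet();
  abstract void reopenDiskReader() throws IOException;
}
{code}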

One question: does Zoie preserve Lucene's "point in time" searching?
Is a new deletion immediately visible to all past reopened readers?  I
think for Lucene we need to preserve this, so we need a data structure
that can be "efficiently" transactional.  I guess we could consider
allowing an NRT reader to optionally violate this, in which case we
wouldn't
need to do any cloning of the deleted docs.

bq. It's a copy-on-read ThreadLocal.

Hmm -- why does each thread make a full clone of the
AcceleratedIntSet?  Just for thread safety against additions to the
set?  Or is this somehow preserving "point in time"?  And does it
fully re-clone whenever new updates have been committed to the RAMDir?
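
For concreteness, here's how I'm picturing a "copy-on-read
ThreadLocal" (an assumption on my part, not Zoie's code): each thread
holds a private snapshot of the set and re-clones it only when a
shared version counter has moved:

{code:java}
import java.util.HashSet;
import java.util.Set;

class CopyOnReadSet {
  private final Set<Long> master = new HashSet<Long>(); // guarded by 'this'
  private volatile long version = 0;

  private final ThreadLocal<Snapshot> local = new ThreadLocal<Snapshot>() {
    @Override protected Snapshot initialValue() { return new Snapshot(); }
  };

  private static final class Snapshot {
    Set<Long> copy = new HashSet<Long>();
    long version = -1;
  }

  synchronized void add(long uid) {
    master.add(uid);
    version++;
  }

  // Each thread re-clones only when new updates have been committed,
  // so lookups between updates touch no locks at all.
  boolean contains(long uid) {
    Snapshot s = local.get();
    if (s.version != version) {
      synchronized (this) {  // full re-clone, per thread, per update
        s.copy = new HashSet<Long>(master);
        s.version = version;
      }
    }
    return s.copy.contains(uid);
  }
}
{code}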

bq. It's actually pretty fantastic performance - check out the zoie perf pages: http://code.google.com/p/zoie/wiki/Performance_Comparisons_for_ZoieLucene24ZoieLucene29LuceneNRT

These are great results!  If I'm reading them right, it looks like
generally you get faster query throughput, and roughly equal indexing
throughput, on upgrading from 2.4 to 2.9?

Zoie also gets much better performance than raw Lucene NRT, but this
test focuses on reopen performance, I think?  Ie, a query reopens the
reader if any new docs were indexed?  If you change that to, say,
reopen once per N seconds, I wonder how the results would compare.

One optimization you could make with Zoie is: if a real-time deletion
(from the AcceleratedIntSet) is in fact hit, mark the corresponding
docID, to make subsequent searches a bit faster (and save the BG CPU
when flushing the deletes to Lucene).
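
Sketched out, reusing the hypothetical collector shape from above
(thread safety elided):

{code:java}
import java.util.BitSet;
import java.util.Set;

abstract class MarkingCollector {
  private final Set<Long> deletedUids; // real-time deletes, by UID
  private final BitSet resolvedDocs;   // docIDs already known deleted

  MarkingCollector(Set<Long> deletedUids, int maxDoc) {
    this.deletedUids = deletedUids;
    this.resolvedDocs = new BitSet(maxDoc);
  }

  final void collect(int docID) {
    if (resolvedDocs.get(docID)) {
      return;                     // cheap bit test, no UID lookup
    }
    if (deletedUids.contains(uidOf(docID))) {
      resolvedDocs.set(docID);    // remember the resolution for later
      return;                     // searches, and for the BG flush
    }
    doCollect(docID);
  }

  abstract long uidOf(int docID);
  abstract void doCollect(int docID);
}
{code}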


> For near real-time search, use paged copy-on-write BitVector impl
> -----------------------------------------------------------------
>
>                 Key: LUCENE-1526
>                 URL: https://issues.apache.org/jira/browse/LUCENE-1526
>             Project: Lucene - Java
>          Issue Type: Improvement
>          Components: Index
>    Affects Versions: 2.4
>            Reporter: Jason Rutherglen
>            Priority: Minor
>         Attachments: LUCENE-1526.patch
>
>   Original Estimate: 168h
>  Remaining Estimate: 168h
>
> SegmentReader currently uses a BitVector to represent deleted docs.
> When performing rapid clone (see LUCENE-1314) and delete operations,
> performing a copy on write of the BitVector can become costly because
> the entire underlying byte array must be created and copied. A way to
> make this clone delete process faster is to implement tombstones, a
> term coined by Marvin Humphrey. Tombstones represent new deletions
> plus the incremental deletions from previously reopened readers in
> the current reader. 
> The proposed implementation of tombstones is to accumulate deletions
> into an int array represented as a DocIdSet. With LUCENE-1476,
> SegmentTermDocs iterates over deleted docs using a DocIdSet rather
> than accessing the BitVector by calling get. This allows a BitVector
> and a set of tombstones to be ANDed together as the current reader's
> delete docs. 
> A tombstone merge policy needs to be defined to determine when to
> merge tombstone DocIdSets into a new deleted docs BitVector as too
> many tombstones would eventually be detrimental to performance. A
> probable implementation will merge tombstones based on the number of
> tombstones and the total number of documents in the tombstones. The
> merge policy may be set in the clone/reopen methods or on the
> IndexReader. 
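
A minimal sketch of the paged copy-on-write idea in the summary above
(my own illustration, not the attached patch): cloning copies only the
page table, and a new deletion copies just the affected page rather
than the whole bit array:

{code:java}
import java.util.Arrays;

class PagedBitVector {
  private static final int PAGE_BITS = 4096;        // bits per page
  private static final int WORDS = PAGE_BITS >>> 6; // longs per page

  private final long[][] pages;

  PagedBitVector(int maxDoc) {
    this(new long[(maxDoc + PAGE_BITS - 1) / PAGE_BITS][]);
  }

  private PagedBitVector(long[][] pages) {
    this.pages = pages;
  }

  // O(numPages) clone: the pages themselves are shared, not copied.
  PagedBitVector cowClone() {
    return new PagedBitVector(pages.clone());
  }

  // Copies only the single page containing docID (a real impl would
  // track page ownership to avoid re-copying an already-private page).
  void set(int docID) {
    int p = docID / PAGE_BITS;
    long[] page = pages[p];
    pages[p] = (page == null) ? new long[WORDS] : Arrays.copyOf(page, WORDS);
    int bit = docID % PAGE_BITS;
    pages[p][bit >>> 6] |= 1L << (bit & 63);
  }

  boolean get(int docID) {
    long[] page = pages[docID / PAGE_BITS];
    if (page == null) return false;
    int bit = docID % PAGE_BITS;
    return (page[bit >>> 6] & (1L << (bit & 63))) != 0;
  }
}
{code}

Under this scheme the per-clone cost scales with the number of pages
touched by new deletions, which is what keeps frequent reopens cheap
when deletes are sparse.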

-- 
This message is automatically generated by JIRA.
-
You can reply to this email to add a comment to the issue online.



