lucene-dev mailing list archives
From "Jason Rutherglen (JIRA)" <>
Subject [jira] Commented: (LUCENE-2680) Improve how IndexWriter flushes deletes against existing segments
Date Wed, 17 Nov 2010 15:41:17 GMT


Jason Rutherglen commented on LUCENE-2680:

{quote}Why do we still have deletesFlushed? And why do we still need to
remap docIDs on merge? I thought with this new approach the docIDUpto for
each buffered delete Term/Query would be a local docID to that{quote}

Deletes flushed can be removed if we store the docid-upto per segment.
Then we'll go back to having a hash map of deletes.
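For illustration, here's a rough sketch of what a per-segment docid-upto map could look like (class and method names here are hypothetical, not the patch's actual API):

```java
import java.util.HashMap;
import java.util.Map;

// Hypothetical sketch: buffered delete terms keyed per segment, each with
// its own docid-upto, instead of a single global deletesFlushed structure.
class SegmentDeletesSketch {
    // term -> max docID (exclusive) the delete applies to, local to the segment
    final Map<String, Integer> termDocIDUpto = new HashMap<>();

    void addDeleteTerm(String term, int docIDUpto) {
        // keep the largest upto seen for this term
        termDocIDUpto.merge(term, docIDUpto, Math::max);
    }

    boolean applies(String term, int docID) {
        Integer upto = termDocIDUpto.get(term);
        return upto != null && docID < upto;
    }

    public static void main(String[] args) {
        SegmentDeletesSketch deletes = new SegmentDeletesSketch();
        deletes.addDeleteTerm("id:42", 100);
        deletes.addDeleteTerm("id:42", 50); // smaller upto does not shrink the range
        System.out.println(deletes.applies("id:42", 99));  // doc below upto
        System.out.println(deletes.applies("id:42", 100)); // doc at/past upto
    }
}
```

Because the upto is local to the segment, nothing needs remapping when docIDs shift during a merge.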

{quote}The SegmentDeletes use less than BYTES_PER_DEL_TERM because it's a
simple HashSet not a HashMap? Ie we are over-counting RAM used now? (Same
for by query){quote}

Intuitively, yes; however, here's the constructor of HashSet:

{code}
public HashSet() {
    map = new HashMap<E,Object>();
}
{code}

bq. why are we tracking the last segment info/index?

I thought last segment was supposed to mark the last segment of a
commit/flush. This way we save the hash(set/map) space for segments up to
the last segment at the time the commit occurred.
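A minimal sketch of that idea (names are hypothetical, not from the patch): segments at or before the index recorded at the last commit need no per-segment delete buffer at all.

```java
import java.util.ArrayList;
import java.util.List;

// Hypothetical sketch of the lastSegment marker: segments at or before the
// index recorded at the last commit/flush need no per-segment delete
// hash(set/map), since their buffered deletes were already applied.
class LastSegmentSketch {
    final List<String> segments = new ArrayList<>();
    int lastCommittedIndex = -1; // index of the last segment as of the last commit

    void commit() {
        lastCommittedIndex = segments.size() - 1;
    }

    boolean needsDeleteBuffer(int segmentIndex) {
        // only segments flushed after the last commit carry a buffer
        return segmentIndex > lastCommittedIndex;
    }

    public static void main(String[] args) {
        LastSegmentSketch w = new LastSegmentSketch();
        w.segments.add("_0");
        w.segments.add("_1");
        w.commit();
        w.segments.add("_2");
        System.out.println(w.needsDeleteBuffer(1)); // committed already
        System.out.println(w.needsDeleteBuffer(2)); // flushed after the commit
    }
}
```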

{quote}Can we store segment's deletes elsewhere?{quote}

We can; however, I had to minimize the places in the code that were
potentially causing errors (trying to reduce the problem set, which helped
locate the intermittent exceptions), and syncing segment infos with the
per-segment deletes was one of those places. That, and I thought it'd be
worth a try to simplify (at the expense of breaking the unstated intention
of the SI class).

{quote}Do we really need to track appliedTerms/appliedQueries? Ie is this
just an optimization so that if the caller deletes by the Term/Query again
we know to skip it? {quote}

Yes to the 2nd question. Why would we want to try deleting multiple times?
The cost is the terms dictionary lookup, which you're saying is in the
noise? I think cracking open a query again could be costly in cases where
the query is indeed expensive.
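As a hedged sketch of what the appliedTerms bookkeeping buys (names are illustrative, not the patch's): once a delete term has been resolved, a repeated delete by the same term skips the lookup entirely.

```java
import java.util.HashSet;
import java.util.Set;

// Hypothetical sketch of the appliedTerms optimization: once a delete term
// has been applied to the segments, repeating the same delete skips the
// terms-dictionary lookup (and, for queries, re-execution).
class AppliedTermsSketch {
    final Set<String> appliedTerms = new HashSet<>();
    int lookups = 0; // counts simulated terms-dictionary lookups

    void applyDelete(String term) {
        if (!appliedTerms.add(term)) {
            return; // already applied; skip the (possibly expensive) lookup
        }
        lookups++; // stand-in for the real terms-dict seek / query execution
    }

    public static void main(String[] args) {
        AppliedTermsSketch s = new AppliedTermsSketch();
        s.applyDelete("id:7");
        s.applyDelete("id:7"); // duplicate: no second lookup
        System.out.println(s.lookups);
    }
}
```

The trade-off is the RAM the applied set itself consumes versus the cost of re-running expensive deletes.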

{quote}not iterate through the terms/queries to subtract the RAM{quote}

Well, the RAM usage tracking can't be completely defined until we finish
deciding how we're storing the terms/queries.
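Whatever the final storage looks like, the iteration can be avoided with a running counter that is subtracted in O(1) on clear. A sketch, assuming a made-up per-entry constant (Lucene's real BYTES_PER_DEL_* values are measured, not this number):

```java
import java.util.HashMap;
import java.util.Map;
import java.util.concurrent.atomic.AtomicLong;

// Hypothetical sketch: keep a running bytesUsed counter updated on add, so
// clearing the buffer resets RAM accounting in O(1) instead of iterating
// every term/query to recompute its cost. The per-entry constant is assumed.
class RamTrackingSketch {
    static final long BYTES_PER_ENTRY = 48; // assumed constant for the sketch
    final Map<String, Integer> buffered = new HashMap<>();
    final AtomicLong bytesUsed = new AtomicLong();

    void add(String term, int docIDUpto) {
        if (buffered.put(term, docIDUpto) == null) {
            // new entry: charge the fixed overhead plus the term's chars
            bytesUsed.addAndGet(BYTES_PER_ENTRY + 2L * term.length());
        }
    }

    void clear() {
        buffered.clear();
        bytesUsed.set(0); // no iteration needed
    }

    public static void main(String[] args) {
        RamTrackingSketch r = new RamTrackingSketch();
        r.add("id:1", 10);
        r.add("id:2", 11);
        System.out.println(r.bytesUsed.get()); // (48 + 8) * 2 = 112
        r.clear();
        System.out.println(r.bytesUsed.get()); // 0
    }
}
```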

> Improve how IndexWriter flushes deletes against existing segments
> -----------------------------------------------------------------
>                 Key: LUCENE-2680
>                 URL:
>             Project: Lucene - Java
>          Issue Type: Improvement
>            Reporter: Michael McCandless
>             Fix For: 4.0
>         Attachments: LUCENE-2680.patch, LUCENE-2680.patch, LUCENE-2680.patch, LUCENE-2680.patch,
LUCENE-2680.patch, LUCENE-2680.patch, LUCENE-2680.patch, LUCENE-2680.patch, LUCENE-2680.patch
> IndexWriter buffers up all deletes (by Term and Query) and only
> applies them if 1) commit or NRT getReader() is called, or 2) a merge
> is about to kick off.
> We do this because, for a large index, it's very costly to open a
> SegmentReader for every segment in the index.  So we defer as long as
> we can.  We do it just before merge so that the merge can eliminate
> the deleted docs.
> But, most merges are small, yet in a big index we apply deletes to all
> of the segments, which is really very wasteful.
> Instead, we should only apply the buffered deletes to the segments
> that are about to be merged, and keep the buffer around for the
> remaining segments.
> I think it's not so hard to do; we'd have to have generations of
> pending deletions, because the newly merged segment doesn't need the
> same buffered deletions applied again.  So every time a merge kicks
> off, we pinch off the current set of buffered deletions, open a new
> set (the next generation), and record which segment was created as of
> which generation.
> This should be a very sizable gain for large indices that mix in
> deletes, though less so in flex, since opening the terms index is much
> faster.
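The generation scheme the issue describes can be sketched roughly like this (all names hypothetical; not the actual patch): each flushed or merged segment records the delete generation current at its creation, and only buffered deletes from that generation onward are ever applied to it.

```java
import java.util.ArrayList;
import java.util.HashMap;
import java.util.List;
import java.util.Map;

// Hypothetical sketch of delete generations: a segment created at gen G
// never sees deletes pinched off before G, since those were already
// applied (e.g. during the merge that produced it).
class DeleteGenerationsSketch {
    long nextGen = 0;
    final Map<String, Long> segmentGen = new HashMap<>();  // segment -> gen at creation
    final List<long[]> bufferedDeletes = new ArrayList<>(); // [gen, termId] pairs

    void bufferDelete(long termId) {
        bufferedDeletes.add(new long[] { nextGen, termId });
    }

    void flushSegment(String name) {
        nextGen++; // pinch off the current set; later deletes get a new gen
        segmentGen.put(name, nextGen);
    }

    // only deletes buffered at or after the segment's creation gen apply
    long pendingDeleteCount(String segment) {
        long segGen = segmentGen.get(segment);
        return bufferedDeletes.stream().filter(d -> d[0] >= segGen).count();
    }

    public static void main(String[] args) {
        DeleteGenerationsSketch g = new DeleteGenerationsSketch();
        g.bufferDelete(1);       // gen 0: applied when the segment is made
        g.flushSegment("_m");    // segment created as of gen 1
        g.bufferDelete(2);       // gen 1: still pending for "_m"
        System.out.println(g.pendingDeleteCount("_m")); // 1
    }
}
```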

This message is automatically generated by JIRA.
You can reply to this email to add a comment to the issue online.

