Date: Wed, 17 Nov 2010 10:41:17 -0500 (EST)
From: "Jason Rutherglen (JIRA)"
To: dev@lucene.apache.org
Subject: [jira] Commented: (LUCENE-2680) Improve how IndexWriter flushes deletes against existing segments

    [ https://issues.apache.org/jira/browse/LUCENE-2680?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=12932988#action_12932988 ]

Jason Rutherglen commented on LUCENE-2680:
------------------------------------------

{quote}Why do we still have deletesFlushed? And why do we still need to remap docIDs on merge? I thought with this new approach the docIDUpto for each buffered delete Term/Query would be a local docID to that segment?{quote}

deletesFlushed can be removed if we store the docid-upto per segment. Then we'll go back to having a hash map of deletes.

{quote}The SegmentDeletes use less than BYTES_PER_DEL_TERM because it's a simple HashSet not a HashMap? Ie we are over-counting RAM used now? (Same for by query){quote}

Intuitively, yes; however, here's the constructor of HashSet:

{code}
public HashSet() {
    map = new HashMap();  // a HashSet is backed by a HashMap, so the per-entry overhead is comparable
}
{code}

bq. why are we tracking the last segment info/index?

I thought the last segment was supposed to be used to mark the last segment as of a commit/flush. This way we save the hash (set/map) space for the segments up to the last segment when the commit occurred.

{quote}Can we store segment's deletes elsewhere?{quote}

We can; however, I had to minimize the places in the code that were potentially causing errors (trying to reduce the problem set, which helped locate the intermittent exceptions), and syncing segment infos with the per-segment deletes was one of those places. That, and I thought it'd be worth a try to simplify (at the expense of breaking the unstated intention of the SI class).

{quote}Do we really need to track appliedTerms/appliedQueries? Ie is this just an optimization so that if the caller deletes by the Term/Query again we know to skip it?{quote}

Yes to the 2nd question. Why would we want to try deleting multiple times? The cost is the terms dictionary lookup, which you're saying is in the noise? I think cracking open a query again could potentially be costly in cases where the query is indeed expensive.
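Just to make that concrete, here's a minimal sketch of the kind of per-segment bookkeeping I mean. It's illustrative only and written against the pre-flex IndexReader API; the structure and method names are made up for the example (only SegmentDeletes, appliedTerms, and docid-upto come from the discussion above), so don't read it as what's in the attached patches:

{code}
// Sketch only, not the actual patch: per-segment buffered deletes plus an
// appliedTerms set, so deleting by the same Term again can skip the
// terms-dictionary lookup against this (immutable) segment.
import java.io.IOException;
import java.util.HashMap;
import java.util.HashSet;
import java.util.Map;
import java.util.Set;

import org.apache.lucene.index.IndexReader;
import org.apache.lucene.index.Term;
import org.apache.lucene.index.TermDocs;

class SegmentDeletes {

  // term -> docid-upto, local to this segment
  final Map<Term,Integer> pendingTerms = new HashMap<Term,Integer>();

  // terms already applied to this segment
  final Set<Term> appliedTerms = new HashSet<Term>();

  void bufferTerm(Term term, int docIDUpto) {
    if (appliedTerms.contains(term)) {
      return;  // already applied to this flushed segment; nothing new can match
    }
    pendingTerms.put(term, docIDUpto);
  }

  void apply(IndexReader reader) throws IOException {
    for (Term term : pendingTerms.keySet()) {
      TermDocs docs = reader.termDocs(term);  // the lookup we'd rather not repeat
      try {
        while (docs.next()) {
          reader.deleteDocument(docs.doc());
        }
      } finally {
        docs.close();
      }
      appliedTerms.add(term);
    }
    pendingTerms.clear();
  }
}
{code}

Since an already-flushed segment is immutable, re-applying the same Term can't match anything it didn't match the first time, which is why the skip is safe.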
{quote}not iterate through the terms/queries to subtract the RAM used?{quote}

Well, the RAM usage tracking can't be completely defined until we finalize how we're storing the terms/queries.

> Improve how IndexWriter flushes deletes against existing segments
> -----------------------------------------------------------------
>
>                 Key: LUCENE-2680
>                 URL: https://issues.apache.org/jira/browse/LUCENE-2680
>             Project: Lucene - Java
>          Issue Type: Improvement
>            Reporter: Michael McCandless
>             Fix For: 4.0
>
>         Attachments: LUCENE-2680.patch, LUCENE-2680.patch, LUCENE-2680.patch, LUCENE-2680.patch, LUCENE-2680.patch, LUCENE-2680.patch, LUCENE-2680.patch, LUCENE-2680.patch, LUCENE-2680.patch
>
>
> IndexWriter buffers up all deletes (by Term and Query) and only
> applies them if 1) commit or NRT getReader() is called, or 2) a merge
> is about to kick off.
>
> We do this because, for a large index, it's very costly to open a
> SegmentReader for every segment in the index. So we defer as long as
> we can. We do it just before merge so that the merge can eliminate
> the deleted docs.
>
> But most merges are small, yet in a big index we apply deletes to all
> of the segments, which is really very wasteful.
>
> Instead, we should only apply the buffered deletes to the segments
> that are about to be merged, and keep the buffer around for the
> remaining segments.
>
> I think it's not so hard to do; we'd have to have generations of
> pending deletions, because the newly merged segment doesn't need the
> same buffered deletions applied again. So every time a merge kicks
> off, we pinch off the current set of buffered deletions, open a new
> set (the next generation), and record which segment was created as of
> which generation.
>
> This should be a very sizable gain for large indices that mix in
> deletes, though less so in flex since opening the terms index is much
> faster.

--
This message is automatically generated by JIRA.
-
You can reply to this email to add a comment to the issue online.

---------------------------------------------------------------------
To unsubscribe, e-mail: dev-unsubscribe@lucene.apache.org
For additional commands, e-mail: dev-help@lucene.apache.org