lucene-dev mailing list archives

From "John Wang (JIRA)" <j...@apache.org>
Subject [jira] Commented: (LUCENE-1634) LogMergePolicy should use the number of deleted docs when deciding which segments to merge
Date Wed, 13 May 2009 16:29:46 GMT

    [ https://issues.apache.org/jira/browse/LUCENE-1634?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=12708995#action_12708995 ]

John Wang commented on LUCENE-1634:
-----------------------------------

In the current Lucene implementation, optimize(int) selects segments to merge based on the
byte size of the segment files: say the index has 10 segments and optimize(6) is called;
Lucene finds the 4 smallest segments by number of bytes in their files.

This selection criterion is flawed because a segment can be very large in terms of bytes
but very small in terms of numDocs (when it contains many deleted docs). Keeping such
segment files around impacts query performance considerably.

This patch tries to fix that in a non-intrusive manner, by extending LogMergePolicy and
normalizing the segment-size calculation to include the delete count.
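
For illustration only (this is not the attached patch), here is a minimal sketch of the idea.
It assumes the 2.9-era API: merge policies take the IndexWriter in their constructor, the
protected size(SegmentInfo) of LogByteSizeMergePolicy is what segment selection compares, and
IndexWriter.numDeletedDocs(SegmentInfo) reports a segment's delete count. The class name
DeleteAwareMergePolicy is made up for the example.

import java.io.IOException;

import org.apache.lucene.index.IndexWriter;
import org.apache.lucene.index.LogByteSizeMergePolicy;
import org.apache.lucene.index.SegmentInfo;

/**
 * Sketch only: scale each segment's byte size by its ratio of live
 * (non-deleted) docs, so a segment that is huge on disk but mostly
 * deletes looks small to the merge-selection logic and gets picked
 * up by optimize(int) much sooner.
 */
public class DeleteAwareMergePolicy extends LogByteSizeMergePolicy {

  private final IndexWriter indexWriter;

  public DeleteAwareMergePolicy(IndexWriter writer) {
    super(writer);               // 2.9-era constructor takes the writer
    this.indexWriter = writer;
  }

  @Override
  protected long size(SegmentInfo info) throws IOException {
    long byteSize = info.sizeInBytes();
    int delCount = indexWriter.numDeletedDocs(info);   // deleted docs in this segment
    if (info.docCount <= 0 || delCount <= 0) {
      return byteSize;                                 // no deletes: raw size is fine
    }
    // Normalize: count only the fraction of bytes belonging to live docs.
    float liveRatio = 1.0f - ((float) delCount / (float) info.docCount);
    return (long) (byteSize * liveRatio);
  }
}

With a policy like this installed, a segment that is mostly deleted docs reports a small
"normalized" size and gets selected by the partial optimize much sooner; the attached patch
may differ in its details.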


> LogMergePolicy should use the number of deleted docs when deciding which segments to merge
> ------------------------------------------------------------------------------------------
>
>                 Key: LUCENE-1634
>                 URL: https://issues.apache.org/jira/browse/LUCENE-1634
>             Project: Lucene - Java
>          Issue Type: Improvement
>          Components: Index
>            Reporter: Yasuhiro Matsuda
>            Assignee: Michael McCandless
>            Priority: Minor
>             Fix For: 2.9
>
>         Attachments: LUCENE-1634.patch
>
>
> I found that the IndexWriter.optimize(int) method does not pick up large segments with a
> lot of deletes, even when most of their docs are deleted. And the existence of such segments
> affected query performance significantly.
> I created an index with 1 million docs, then went over all docs and updated a few thousand
> at a time. I ran optimize(20) occasionally. What I saw were large segments with most of their
> docs deleted. Although these segments contained few valid docs, they remained in the directory
> for a very long time, until more segments with comparable or bigger sizes were created.
> This is because LogMergePolicy.findMergesForOptimize uses the size of segments but does
> not take the number of deleted documents into consideration when it decides which segments
> to merge. So a simple fix is to use the delete count to calibrate the segment size. I can
> create a patch for this.
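
Purely for context, a rough sketch of the reproduction scenario above, with a delete-aware
policy installed (it reuses the hypothetical DeleteAwareMergePolicy sketched in the comment;
2.9-era IndexWriter API assumed), might look like:

import java.io.File;

import org.apache.lucene.analysis.standard.StandardAnalyzer;
import org.apache.lucene.document.Document;
import org.apache.lucene.document.Field;
import org.apache.lucene.index.IndexWriter;
import org.apache.lucene.index.Term;
import org.apache.lucene.store.Directory;
import org.apache.lucene.store.FSDirectory;
import org.apache.lucene.util.Version;

public class OptimizeWithDeletes {
  public static void main(String[] args) throws Exception {
    Directory dir = FSDirectory.open(new File("delete-test-index"));
    IndexWriter writer = new IndexWriter(dir, new StandardAnalyzer(Version.LUCENE_29),
                                         IndexWriter.MaxFieldLength.UNLIMITED);
    writer.setMergePolicy(new DeleteAwareMergePolicy(writer));

    // First pass: build an index with 1 million docs.
    for (int i = 0; i < 1000000; i++) {
      writer.addDocument(makeDoc(i));
    }
    // Second pass: update every doc; each update marks the old copy deleted,
    // leaving earlier segments large on disk but consisting mostly of deletes.
    for (int i = 0; i < 1000000; i++) {
      writer.updateDocument(new Term("id", Integer.toString(i)), makeDoc(i));
    }

    // With delete-calibrated sizes, optimize(20) prefers to merge away
    // the segments whose docs are mostly deleted.
    writer.optimize(20);
    writer.close();
    dir.close();
  }

  static Document makeDoc(int i) {
    Document doc = new Document();
    doc.add(new Field("id", Integer.toString(i), Field.Store.YES, Field.Index.NOT_ANALYZED));
    return doc;
  }
}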

-- 
This message is automatically generated by JIRA.
-
You can reply to this email to add a comment to the issue online.


---------------------------------------------------------------------
To unsubscribe, e-mail: java-dev-unsubscribe@lucene.apache.org
For additional commands, e-mail: java-dev-help@lucene.apache.org

