lucene-dev mailing list archives

From "Michael McCandless (JIRA)" <j...@apache.org>
Subject [jira] Commented: (LUCENE-845) If you "flush by RAM usage" then IndexWriter may over-merge
Date Fri, 23 Mar 2007 15:27:32 GMT

    [ https://issues.apache.org/jira/browse/LUCENE-845?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel#action_12483631 ]

Michael McCandless commented on LUCENE-845:
-------------------------------------------

This bug is actually rather serious.

If you set maxBufferedDocs to a very large number (on the expectation
that it's never used because you flush manually by RAM usage), then
the merge policy will always merge the index down to 1 segment as soon
as it hits mergeFactor segments.

This will be an O(N^2) slowdown.  E.g., if based on RAM usage you are
flushing every 100 docs, then at 1000 docs you will merge down to 1
segment.  Then at 1900 docs you merge down to 1 segment again.  At
2800, 3700, 4600, ... (every 900 docs) you keep merging down to 1
segment.  Your indexing process gets very slow because every 900
documents the entire index is effectively being optimized.
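
To make the quadratic blow-up concrete, here is a back-of-the-envelope
simulation (not Lucene code; the 100-docs-per-flush and mergeFactor=10
numbers are just the illustrative values from above):

    // Simulates the flawed policy: with a huge maxBufferedDocs every
    // segment looks like level 0, so all segments are merged to one
    // whenever mergeFactor of them accumulate.
    public class OverMergeSim {
      public static void main(String[] args) {
        final int flushSize = 100, mergeFactor = 10, numFlushes = 100;
        java.util.List<Integer> segments = new java.util.ArrayList<Integer>();
        long docsRewritten = 0;  // total docs rewritten by merging
        for (int i = 1; i <= numFlushes; i++) {
          segments.add(flushSize);                 // one RAM-based flush
          if (segments.size() == mergeFactor) {    // all treated as level 0
            int merged = 0;
            for (int size : segments) merged += size;
            docsRewritten += merged;               // whole index is rewritten
            segments.clear();
            segments.add(merged);
            System.out.println("at " + (i * flushSize)
                + " docs: merged down to 1 segment of " + merged);
          }
        }
        System.out.println("docs rewritten by merges: " + docsRewritten);
      }
    }

Over 10,000 added docs this rewrites about 60,500 docs in merges, and
the per-merge cost keeps growing linearly with index size.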

With LUCENE-843 I'm thinking we should deprecate maxBufferedDocs
entirely and switch to flushing by RAM usage instead (you can always
manually flush every N documents in your app if for some reason you
need that).  But obviously we need to resolve this bug first.
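
For reference, the flush-by-RAM pattern under discussion looks roughly
like this (a sketch only; dir, analyzer and the docs iterator are
assumed, and the 32 MB budget is arbitrary).  Note that setting
maxBufferedDocs very high is exactly what trips the over-merging:

    // Sketch: flush by RAM usage instead of by document count.
    IndexWriter writer = new IndexWriter(dir, analyzer, true);
    writer.setMaxBufferedDocs(Integer.MAX_VALUE); // "never" flush by doc count
    final long maxRam = 32 * 1024 * 1024;         // arbitrary 32 MB budget
    while (docs.hasNext()) {
      writer.addDocument((Document) docs.next());
      if (writer.ramSizeInBytes() >= maxRam)
        writer.flush();   // write buffered docs as a new segment
    }
    writer.close();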


> If you "flush by RAM usage" then IndexWriter may over-merge
> -----------------------------------------------------------
>
>                 Key: LUCENE-845
>                 URL: https://issues.apache.org/jira/browse/LUCENE-845
>             Project: Lucene - Java
>          Issue Type: Bug
>          Components: Index
>    Affects Versions: 2.1
>            Reporter: Michael McCandless
>         Assigned To: Michael McCandless
>            Priority: Minor
>
> I think a good way to maximize performance of Lucene's indexing for a
> given amount of RAM is to flush (writer.flush()) the added documents
> whenever the RAM usage (writer.ramSizeInBytes()) has crossed the max
> RAM you can afford.
> But this can confuse the merge policy and cause over-merging unless
> you set maxBufferedDocs properly.
> This is because the merge policy looks at the current maxBufferedDocs
> to figure out which segments are level 0 (first flushed) or level 1
> (merged from <mergeFactor> level 0 segments).
> I'm not sure how to fix this.  Maybe we can look at net size (bytes)
> of a segment and "infer" level from this?  Still we would have to be
> resilient to the application suddenly increasing the RAM allowed.
> The good news is that to work around this bug I think you just need
> to ensure that your maxBufferedDocs is less than mergeFactor *
> typical-number-of-docs-flushed.
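
The level inference described above can be sketched roughly like this
(simplified; not the actual 2.1 merge code):

    // A segment whose doc count fits under
    // maxBufferedDocs * mergeFactor^L is treated as level L.
    int level = 0;
    long upperBound = maxBufferedDocs;
    while (docCount > upperBound) {
      upperBound *= mergeFactor;
      level++;
    }
    // With a huge maxBufferedDocs every segment satisfies
    // docCount <= maxBufferedDocs, so everything looks like level 0
    // and is swept into each merge; hence the over-merging.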
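
To make the suggested workaround concrete: with the default mergeFactor
of 10 and flushes that typically carry about 100 docs (illustrative
numbers), maxBufferedDocs should stay below 10 * 100 = 1000:

    // Workaround sketch (illustrative numbers): keep maxBufferedDocs
    // under mergeFactor * typical-docs-per-flush so the inferred
    // levels stay sane.
    int mergeFactor = 10;           // IndexWriter default
    int typicalDocsPerFlush = 100;  // observed under your RAM budget
    writer.setMaxBufferedDocs(mergeFactor * typicalDocsPerFlush - 1); // 999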

-- 
This message is automatically generated by JIRA.
-
You can reply to this email to add a comment to the issue online.

