Date: Wed, 9 May 2012 11:07:49 +0000 (UTC)
From: "Simon Willnauer (JIRA)"
To: dev@lucene.apache.org
Message-ID: <774962763.43698.1336561669667.JavaMail.tomcat@hel.zones.apache.org>
In-Reply-To: <624692917.4687.1335434477450.JavaMail.tomcat@hel.zones.apache.org>
Subject: [jira] [Commented] (LUCENE-4022) Offline Sorter wrongly uses MIN_BUFFER_SIZE if there is more memory available

    [ https://issues.apache.org/jira/browse/LUCENE-4022?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13271312#comment-13271312 ]

Simon Willnauer commented on LUCENE-4022:
-----------------------------------------

bq. How did you come up with the 10x factor though? Is it something off the top of your head?

I wanted to force a grow only when the "unallocated" heap is significantly bigger, so a factor of 10 seemed to be a good start. I mean, this automatic sizing should be a conservative default that gives you reasonable performance. First and foremost it should make sure your system is stable and doesn't run into OOMs etc. The factor might seem somewhat arbitrary. I will add a CHANGES entry and commit this stuff. Seems like Robert wants to roll a 3.6.1 soonish ;)

> Offline Sorter wrongly uses MIN_BUFFER_SIZE if there is more memory available
> -----------------------------------------------------------------------------
>
>                 Key: LUCENE-4022
>                 URL: https://issues.apache.org/jira/browse/LUCENE-4022
>             Project: Lucene - Java
>          Issue Type: Bug
>          Components: modules/spellchecker
>    Affects Versions: 3.6, 4.0
>            Reporter: Simon Willnauer
>            Assignee: Simon Willnauer
>             Fix For: 4.0, 3.6.1
>
>         Attachments: LUCENE-4022.patch
>
>
> The Sorter we use for offline sorting seems to use MIN_BUFFER_SIZE as an upper bound even if there is more memory available.
> See this snippet:
> {code}
> long half = free/2;
> if (half >= ABSOLUTE_MIN_SORT_BUFFER_SIZE) {
>   return new BufferSize(Math.min(MIN_BUFFER_SIZE_MB * MB, half));
> }
>
> // by max mem (heap will grow)
> half = (max - total) / 2;
> return new BufferSize(Math.min(MIN_BUFFER_SIZE_MB * MB, half));
> {code}
> We should use Math.max instead of Math.min here.
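
For illustration, here is a minimal, self-contained sketch of one way the corrected heuristic discussed above could look: MIN_BUFFER_SIZE_MB acts as a threshold below which the sizing considers growing the heap, rather than as a Math.min cap, and the heap is only relied on to grow when the total available memory is much (roughly 10x) larger than that minimum. The class name, constant values, and the BufferSize holder are assumptions made for the sketch; this is not the committed LUCENE-4022 patch.

{code}
// Hypothetical, self-contained sketch -- not the committed LUCENE-4022 patch.
// Constant names mirror the quoted snippet; their values are assumed here.
public class SortBufferSizeSketch {

  static final long MB = 1024 * 1024;
  static final long MIN_BUFFER_SIZE_MB = 32;                // assumed default
  static final long ABSOLUTE_MIN_SORT_BUFFER_SIZE = MB / 2; // assumed floor

  /** Simple holder for the chosen sort buffer size in bytes. */
  static final class BufferSize {
    final long bytes;
    BufferSize(long bytes) { this.bytes = bytes; }
  }

  /**
   * Picks a sort buffer size from the current heap state. The minimum buffer
   * size is a lower bound that triggers growth, not a cap, and the heap is
   * only grown when the total available memory is much (10x) larger than it.
   */
  static BufferSize automatic() {
    Runtime rt = Runtime.getRuntime();
    long max = rt.maxMemory();      // upper limit the heap may grow to (-Xmx)
    long total = rt.totalMemory();  // heap currently committed by the JVM
    long free = rt.freeMemory();    // unused part of the committed heap
    long totalAvailable = max - total + free;  // everything we could ever use

    long minBufferBytes = MIN_BUFFER_SIZE_MB * MB;
    // Conservative first guess: half of what is already free, no heap growth needed.
    long bufferBytes = free / 2;

    // Go beyond the free heap only if the first guess is too small, or if the
    // total available memory is much (10x) larger than the minimum buffer.
    if (bufferBytes < minBufferBytes || totalAvailable > 10 * minBufferBytes) {
      if (totalAvailable / 2 > minBufferBytes) {
        bufferBytes = totalAvailable / 2;  // enough room for a big buffer
      } else {
        // Small heap: stay conservative, but never drop below the absolute floor.
        bufferBytes = Math.max(ABSOLUTE_MIN_SORT_BUFFER_SIZE, bufferBytes);
      }
    }
    return new BufferSize(Math.min((long) Integer.MAX_VALUE, bufferBytes));
  }

  public static void main(String[] args) {
    System.out.println("sort buffer: " + automatic().bytes / MB + " MB");
  }
}
{code}

The 10x check keeps the default conservative on small heaps (no forced growth, no OOM) while still letting a large heap back a much bigger sort buffer.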