lucene-java-user mailing list archives

From Otis Gospodnetic <otis_gospodne...@yahoo.com>
Subject Re: SegmentReader using too much memory?
Date Tue, 12 Dec 2006 03:41:58 GMT
Eric, you said you aren't using any Field.Index.NO_NORMS fields, but SegmentReader.ones is
only used for fields that were indexed with NO_NORMS, so things don't add up here.
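
For reference, one way to double-check which fields in the index actually carry norms is
IndexReader.hasNorms(); a minimal sketch against the Lucene 2.0 API (the index path below
is a placeholder, not from the thread):

import java.util.Collection;
import java.util.Iterator;

import org.apache.lucene.index.IndexReader;

public class NormCheck {
    public static void main(String[] args) throws Exception {
        // Placeholder path; point this at the index being stress-tested.
        IndexReader reader = IndexReader.open("/path/to/index");
        try {
            // All indexed fields present in the index.
            Collection fields = reader.getFieldNames(IndexReader.FieldOption.INDEXED);
            for (Iterator it = fields.iterator(); it.hasNext();) {
                String field = (String) it.next();
                // hasNorms() is false for fields indexed with Field.Index.NO_NORMS.
                System.out.println(field + " hasNorms=" + reader.hasNorms(field));
            }
        } finally {
            reader.close();
        }
    }
}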

Otis

----- Original Message ----
From: Yonik Seeley <yonik@apache.org>
To: java-user@lucene.apache.org
Sent: Monday, December 11, 2006 8:53:15 PM
Subject: Re: SegmentReader using too much memory?

On 12/11/06, Eric Jain <Eric.Jain@isb-sib.ch> wrote:
> I've noticed that after stress-testing my application (uses Lucene 2.0) for
> a while, I have almost 200mb of byte[]s hanging around, the top two
> culprits being:
>
> 24 x SegmentReader.Norm.bytes = 112mb
>   2 x SegmentReader.ones       =  16mb

Each indexed field has a norm array that is the product of its
index-time boost and the length normalization factor.  If you don't
need either, you can omit the norms (as it looks like you already have
on some fields, given that "ones" is the array of fake norms used in
place of the "real" norms).

-Yonik
http://incubator.apache.org/solr Solr, the open-source Lucene search server
