lucene-java-user mailing list archives

From Eric Jain <Eric.J...@isb-sib.ch>
Subject Re: SegmentReader using too much memory?
Date Tue, 12 Dec 2006 02:33:26 GMT
Yonik Seeley wrote:
> On 12/11/06, Eric Jain <Eric.Jain@isb-sib.ch> wrote:
>> I've noticed that after stress-testing my application (which uses Lucene
>> 2.0) for a while, I have almost 200mb of byte[]s hanging around, the top
>> two culprits being:
>>
>> 24 x SegmentReader.Norm.bytes = 112mb
>>   2 x SegmentReader.ones       =  16mb
> 
> Each indexed field has a norm array that is the product of its
> index-time boost and the length normalization factor.  If you don't
> need either, you can omit the norms (as it looks like you already have
> on some fields, given that "ones" is the fake norms used in place of
> the "real" norms).

Thanks for the explanation.
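
(If I understand correctly, each norm array is one byte per document per 
indexed field per segment, so 24 arrays totalling 112mb works out to about 
112mb / 24 ≈ 4.7mb, i.e. roughly 4-5 million documents, per array.)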

I'm not sure where the fields without norms come from: I don't use 
Field.setOmitNorms or Field.Index.NO_NORMS anywhere!
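
Just to make sure I'm grepping for the right thing, this is roughly what I'd 
expect norm-omitting code to look like (a sketch based on my reading of the 
2.0 Field API; field names and values are made up):

  import org.apache.lucene.document.Document;
  import org.apache.lucene.document.Field;

  Document doc = new Document();

  // omit norms up front: indexed, untokenized, no norms
  doc.add(new Field("id", idValue, Field.Store.YES, Field.Index.NO_NORMS));

  // or switch norms off on an otherwise tokenized field
  Field body = new Field("body", bodyText, Field.Store.NO, Field.Index.TOKENIZED);
  body.setOmitNorms(true);
  doc.add(body);

Neither pattern appears anywhere in our indexing code.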

I do want to use document boosting... Is that independent of field 
boosting? The length normalization, on the other hand, may not be necessary.
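
Concretely, our indexing code does roughly the following (field names and 
boost values are made up; writer is an IndexWriter):

  Document doc = new Document();
  doc.setBoost(2.0f);                  // document-level boost

  Field title = new Field("title", titleText, Field.Store.YES, Field.Index.TOKENIZED);
  title.setBoost(1.5f);                // field-level boost
  doc.add(title);

  writer.addDocument(doc);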
