lucene-java-user mailing list archives

From "Yonik Seeley" <yo...@apache.org>
Subject Re: SegmentReader using too much memory?
Date Tue, 12 Dec 2006 04:27:22 GMT
On 12/11/06, Eric Jain <Eric.Jain@isb-sib.ch> wrote:
> Yonik Seeley wrote:
> > There is no real document boost at the index level... it is simply
> > multiplied into the boost for every field of that document.  So it
> > comes down to what fields you want that index-time boost to take
> > effect on (as well as length normalization).
>
> Come to think of it, I do have two large indexes that don't really need any
> document boosting, so I could perhaps save some memory there...
>
> But what I still don't understand is why the amount of memory used by
> SegmentReader.Norm.bytes keeps growing -- at first quite fast, to about
> 150 MB, then more slowly.

Norms are read on demand, per indexed field.
So assuming your index is optimized (a single segment), memory grows by
one byte[] -- one byte per document -- each time you search on a new field.
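The growth pattern can be sketched numerically: with one norm byte per document per loaded field, total norms memory is roughly maxDoc times the number of distinct fields searched so far. The document count and field count below are hypothetical, not taken from this thread:

```java
// Back-of-the-envelope sketch of Lucene norms memory usage: one byte
// per document, per indexed field whose norms have been loaded.
public class NormsMemory {

    // Approximate bytes held in SegmentReader.Norm.bytes once
    // `fieldsSearched` distinct fields have been searched on a
    // single-segment (optimized) index with `maxDoc` documents.
    static long normsBytes(long maxDoc, int fieldsSearched) {
        return maxDoc * fieldsSearched;
    }

    public static void main(String[] args) {
        long maxDoc = 30_000_000L;  // hypothetical document count
        int fieldsSearched = 5;     // hypothetical distinct fields searched
        long bytes = normsBytes(maxDoc, fieldsSearched);
        // Each newly searched field adds another maxDoc-sized byte[].
        System.out.println(bytes / (1024 * 1024) + " MB");
    }
}
```

This is why the curve flattens: once every field you ever search on has had its norms loaded, no further byte[] arrays are allocated.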

-Yonik
http://incubator.apache.org/solr Solr, the open-source Lucene search server

---------------------------------------------------------------------
To unsubscribe, e-mail: java-user-unsubscribe@lucene.apache.org
For additional commands, e-mail: java-user-help@lucene.apache.org

