lucene-java-user mailing list archives

From "Yonik Seeley" <>
Subject Re: SegmentReader using too much memory?
Date Tue, 12 Dec 2006 04:27:22 GMT
On 12/11/06, Eric Jain <> wrote:
> Yonik Seeley wrote:
> > There is no real document boost at the index level... it is simply
> > multiplied into the boost for every field of that document.  So it
> > comes down to what fields you want that index-time boost to take
> > effect on (as well as length normalization).
> Come to think of it, I have two large indexes that don't really need any
> document boosting; I could perhaps save some memory there...
> But what I still don't understand is why the amount of memory used by
> SegmentReader.Norm.bytes keeps growing -- at first quite fast, to about
> 150 MB, then more slowly.

Norms are read on demand, per indexed field.
So assuming your index is optimized (a single segment), memory use
increases by one byte[] (one byte per document in the index) each time
you search on a new field.
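To make the growth pattern concrete, here is a minimal back-of-the-envelope sketch of that lazy-loading behavior. It assumes the usual Lucene layout of one norms byte per document per indexed field in an optimized (single-segment) index; the document count and field count below are illustrative, not taken from Eric's actual index.

```java
// Sketch: estimating SegmentReader norms memory under the assumption
// that each indexed field, once searched, pins a byte[maxDoc] in RAM.
public class NormsMemoryEstimate {

    // Total norms bytes held after `fieldsSearched` distinct fields
    // have each been searched at least once.
    static long normsBytes(long maxDoc, int fieldsSearched) {
        return maxDoc * (long) fieldsSearched;
    }

    public static void main(String[] args) {
        long maxDoc = 50_000_000L; // hypothetical optimized index
        for (int fields = 1; fields <= 3; fields++) {
            System.out.printf("after searching %d field(s): %d MB%n",
                    fields, normsBytes(maxDoc, fields) / (1024 * 1024));
        }
    }
}
```

The point of the sketch is that memory climbs in steps of maxDoc bytes as queries touch new fields, then plateaus once every searched field's norms are resident, which matches the "fast at first, then slower" growth described above.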

-Yonik
Solr, the open-source Lucene search server

