lucenenet-user mailing list archives

From Nicholas Paldino <casper...@caspershouse.com>
Subject RE: does FieldCache play well with GC large object heap?
Date Sun, 14 Sep 2014 03:20:43 GMT
Jon,

	If you're using .NET 4.5.1 or later and identify the LOH as an area of concern, you can
force a compaction of it by setting the LargeObjectHeapCompactionMode property on the GCSettings class:

http://msdn.microsoft.com/en-us/library/system.runtime.gcsettings.largeobjectheapcompactionmode(v=vs.110).aspx
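A minimal sketch of how that property is used (not from the original message; assumes a console app targeting .NET 4.5.1 or later):

```csharp
using System;
using System.Runtime;

class Program
{
    static void Main()
    {
        // Request a one-time compaction of the large object heap.
        // The compaction happens on the next blocking gen-2 collection.
        GCSettings.LargeObjectHeapCompactionMode =
            GCLargeObjectHeapCompactionMode.CompactOnce;

        // Force a blocking full collection so the compaction runs now.
        GC.Collect();

        // Per the documentation, the mode reverts to Default once
        // the compaction has occurred.
        Console.WriteLine(GCSettings.LargeObjectHeapCompactionMode);
    }
}
```

Note that CompactOnce is one-shot: it must be set again before each collection where LOH compaction is wanted, so it is best triggered at a quiet point (e.g. right after swapping in a new IndexReader).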

		- Nick

-----Original Message-----
From: Jonathan Resnick [mailto:jresnick@gmail.com] 
Sent: Saturday, September 13, 2014 10:34 PM
To: user@lucenenet.apache.org
Subject: does FieldCache play well with GC large object heap?

Hi,

I'm relatively new to Lucene.Net.  We've recently integrated it into our application to provide
search functionality, and at this point I'm considering introducing some custom scoring that
will require use of the FieldCache. Up until now I've been a little wary of making use of
the FieldCache because I know that it creates huge arrays, and my concern is whether this
is going to create issues for the GC - specifically with respect to fragmentation of the
large object heap.

For example, if we have ~10M documents, an int field cache will require 40MB of contiguous
memory, which will be allocated on the large object heap. If we're opening new IndexReaders
1000s of times per day (because we're adding/updating documents), then we're asking the GC
to be continually allocating and discarding these 40MB arrays. Since the large object heap
does not get compacted, and since the array size likely needs to grow a bit each time (due
to new docs added), it seems this would lead to fragmentation and eventual out-of-memory conditions.
Is this an issue in practice?
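The arithmetic above can be checked directly. This is an illustrative sketch (the 85,000-byte LOH threshold is the documented default for the .NET runtime, not something stated in the original message):

```csharp
using System;

class LohMath
{
    // Objects at or above this size are allocated on the large
    // object heap (default threshold in the .NET runtime).
    const long LohThresholdBytes = 85_000;

    static void Main()
    {
        long docs = 10_000_000;

        // An int field cache holds one 4-byte int per document,
        // in a single contiguous array.
        long bytes = docs * sizeof(int);

        Console.WriteLine(bytes);                      // 40,000,000 bytes (~40 MB)
        Console.WriteLine(bytes >= LohThresholdBytes); // comfortably on the LOH
    }
}
```

So the array is roughly 470 times the LOH threshold, and any array over about 21,250 ints would land there anyway.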

If anyone with more Lucene.net experience could share some insight here, it would be much
appreciated.

-Jon