lucene-solr-user mailing list archives

From Martin Grotzke <>
Subject Re: Use terracotta bigmemory for solr-caches
Date Wed, 26 Jan 2011 11:03:29 GMT
On Tue, Jan 25, 2011 at 4:19 PM, Em <> wrote:

> Hi Martin,
> are you sure that your GC is well tuned?
These are the heap-related JVM configurations for the servers running with a
17GB heap (one with the parallel collector, one with CMS):

-XX:+HeapDumpOnOutOfMemoryError -server -Xmx17G -XX:MaxPermSize=256m
-XX:NewSize=2G -XX:MaxNewSize=2G -XX:SurvivorRatio=6

-XX:+HeapDumpOnOutOfMemoryError -server -Xmx17G -XX:MaxPermSize=256m
-XX:NewSize=2G -XX:MaxNewSize=2G -XX:SurvivorRatio=6 -XX:+UseParallelOldGC

Another server is running with an 8GB max heap, and that search server also
shows lower peaks in response times.
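To confirm that the longest requests really coincide with full GC cycles, it
would also help to enable GC logging alongside the flags above (standard
HotSpot options for the Sun JVMs of that era; the log path is just an
example):

-verbose:gc -XX:+PrintGCDetails -XX:+PrintGCTimeStamps
-XX:+PrintGCApplicationStoppedTime -Xloggc:/var/log/solr/gc.log

The PrintGCApplicationStoppedTime output in particular makes it easy to match
stop-the-world pauses against slow request timestamps.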

To me it seems that there's simply too much memory being
allocated/collected/compacted. I'm currently checking how far we can reduce
the cache sizes (and the max heap) without hurting response times or
increasing disk I/O. So far, reducing the documentCache size does lower the
cache hit ratio, but it has no negative impact on response times, and I/O
hasn't increased either. Therefore I'd keep shrinking the cache sizes as long
as there are no negative effects, and then re-check the slowest requests to
see whether they're still caused by full GC cycles. Even if they are, the
pauses should be much shorter because less memory has to be
collected/compacted.
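For reference, the documentCache is configured in solrconfig.xml; the sizes
below are only illustrative of the kind of reduction I mean, not our actual
values:

<documentCache class="solr.LRUCache"
               size="4096"
               initialSize="1024"
               autowarmCount="0"/>

Shrinking size (and initialSize) directly reduces the number of cached
Document objects held on the heap, which is what drives the old-generation
footprint here.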

So now I also think that Terracotta BigMemory is not the right solution :-)


> A request that needs more than a minute isn't the standard, even when I
> consider all the other postings about response-performance...
> Regards
> --

Martin Grotzke
