lucene-solr-user mailing list archives

From Shawn Heisey <>
Subject Re: Solr Heap Dump: Any suggestions on what to look for?
Date Thu, 09 Feb 2017 16:00:53 GMT
On 2/9/2017 6:19 AM, Kelly, Frank wrote:
> Got a heap dump on an Out of Memory error.
> Analyzing the dump now in Visual VM
> Seeing a lot of byte[] arrays (77% of our 8GB Heap) in
>   * TreeMap$Entry
>   * FieldCacheImpl$SortedDocValues
> We’re considering switching over to DocValues but would rather be
> definitive about the root cause before we experiment with DocValues
> and require a reindex of our 200M document index in each of our 4
> data centers.
> Any suggestions on what I should look for in this heap dump to get a
> definitive root cause?

When the large allocations are byte[] arrays, the allocating code is
probably a low-level class, most likely in Lucene.  Solr has almost no
influence on these allocations, except through the schema: enabling
docValues changes which Lucene code gets called for sorting and
faceting.  Note that enabling docValues requires wiping the index and
rebuilding it from scratch.
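
For reference, docValues is enabled per field in the schema.  A
minimal sketch, assuming a hypothetical string field named
"city_sort" (your field names and types will differ):

```xml
<!-- schema.xml / managed-schema: docValues="true" stores the field's
     values in a column-oriented on-disk structure, so sorting and
     faceting no longer build a FieldCache entry on the heap. -->
<field name="city_sort" type="string" indexed="true" stored="false"
       docValues="true"/>
```

After a change like this, every document must be reindexed before the
new structure exists on disk.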

Another possible source of problems like this is the filterCache.
Each filterCache entry is a bitset with one bit per document, so a 200
million document index (assuming it's all on the same machine) results
in filterCache entries that are 25 million bytes each.  In the Solr
examples, the filterCache defaults to a size of 512.  If a cache that
size fills up on a 200 million document index, it will require nearly
13 gigabytes of heap memory.
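
The arithmetic behind that estimate can be sketched as follows (the
200 million documents and cache size of 512 are the numbers from this
thread):

```java
public class FilterCacheSizing {
    // One bit per document in the index, rounded down to whole bytes.
    static long bytesPerFilterEntry(long numDocs) {
        return numDocs / 8;
    }

    // Worst case: the cache is full, every entry is a full bitset.
    static long worstCaseCacheBytes(long numDocs, int cacheSize) {
        return bytesPerFilterEntry(numDocs) * cacheSize;
    }

    public static void main(String[] args) {
        long perEntry = bytesPerFilterEntry(200_000_000L);
        long total = worstCaseCacheBytes(200_000_000L, 512);
        // 25,000,000 bytes per entry; 12,800,000,000 bytes total,
        // i.e. roughly 12.8 GB -- "nearly 13 gigabytes".
        System.out.println(perEntry + " bytes/entry, " + total
                + " bytes total");
    }
}
```

In practice not every entry is a full bitset (Solr can store small
result sets more compactly), so this is an upper bound, but it shows
why a large index with a full filterCache can exhaust an 8GB heap.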

