lucene-solr-user mailing list archives

From KNitin <nitin.t...@gmail.com>
Subject Re: Solr Heap, MMaps and Garbage Collection
Date Mon, 03 Mar 2014 06:54:14 GMT
Thanks, Walter

The hit rate on the document caches is close to 70-80%, and the filter caches
sit at a 100% hit rate (since most of our queries filter on the same fields but
have a different q parameter). The query result cache is not of great
importance to me, since the hit rate there is almost negligible.

Does this mean I need to increase the size of my filter and document caches
for large indices?
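For context, the per-collection cache sizes live in solrconfig.xml. A minimal sketch of what larger filter and document caches might look like — the class choices follow common Solr 4.x defaults, and the sizes are purely illustrative placeholders, not tuned recommendations:

```xml
<!-- solrconfig.xml: illustrative cache sizing; values are assumptions,
     not recommendations -- tune against your own hit rates and heap. -->
<filterCache class="solr.FastLRUCache"
             size="2048"
             initialSize="512"
             autowarmCount="256"/>

<documentCache class="solr.LRUCache"
               size="16384"
               initialSize="4096"
               autowarmCount="0"/>
```

Note that the documentCache cannot usefully be autowarmed, since internal document IDs change between searchers, which is why autowarmCount stays at 0.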

My 25 GB heap usage breaks down as follows:

1. 19 GB - Old Gen (100% pool utilization)
2. 3 GB - New Gen (50% pool utilization)
3. 2.8 GB - Perm Gen (I am guessing this is because of interned strings)
4. Survivor space is on the order of 300-400 MB and is almost always 100%
full. (Is this a major issue?)

We are also currently using the Parallel GC collector but are planning to move
to CMS for shorter stop-the-world GC pauses. If I increase the filter cache and
document cache sizes, those entries would also end up in the old gen, right?
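For reference, a typical flag set for that Parallel-to-CMS move on a JVM of that era (Java 7) looks roughly like the following. The heap and new-gen sizes here are placeholders I picked to match the numbers above (and Walter's ~25% new-gen suggestion), not tuned values:

```shell
# Illustrative CMS settings for a Solr JVM -- all sizes are placeholders,
# tune them against your own GC logs.
GC_TUNE="-Xms25g -Xmx25g \
 -Xmn6g \
 -XX:+UseConcMarkSweepGC \
 -XX:+UseParNewGC \
 -XX:CMSInitiatingOccupancyFraction=70 \
 -XX:+UseCMSInitiatingOccupancyOnly \
 -XX:+PrintGCDetails -XX:+PrintGCTimeStamps"

# These would be appended to the JVM command line that starts Solr, e.g.:
#   java $GC_TUNE -jar start.jar
echo "$GC_TUNE"
```

Setting -Xms equal to -Xmx avoids heap resizing pauses, and -Xmn fixes the new-gen size explicitly; CMSInitiatingOccupancyFraction starts concurrent collection before the old gen is completely full.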

A very naive question: how is increasing the young gen going to help if we
know that Solr is already pushing the major caches and other long-lived
objects into the old gen? My young gen pool utilization is still well under
50%.


Thanks
Nitin


On Sun, Mar 2, 2014 at 9:31 PM, Walter Underwood <wunder@wunderwood.org> wrote:

> An LRU cache will always fill up the old generation. The objects being
> ejected are the oldest ones, and those are usually in the old generation.
>
> Increasing the heap size will not eliminate this. It will only make major,
> stop-the-world collections longer.
>
> Increase the new generation size until the rate of old gen increase slows
> down. Then choose a total heap size to control the frequency (and duration)
> of major collections.
>
> We run with the new generation at about 25% of the heap, so 8GB total and
> a 2GB newgen.
>
> A 512 entry cache is very small for query results or docs. We run with 10K
> or more entries for those. The filter cache size depends on your usage. We
> have only a handful of different filter queries, so a tiny cache is fine.
>
> What is your hit rate on the caches?
>
> wunder
>
> On Mar 2, 2014, at 7:42 PM, KNitin <nitin.tnvl@gmail.com> wrote:
>
> > Hi
> >
> > I have a very large index for a few collections, and when they are being
> > queried, I see the old gen space close to 100% usage all the time. The
> > system becomes extremely slow due to GC activity right after that, and it
> > gets into this cycle very often.
> >
> > I have given Solr close to 30 GB of heap on a 65 GB RAM machine, with the
> > rest left to the OS. I have a lot of hits in the filter, query result, and
> > document caches, and the size of all the caches is around 512 entries per
> > collection. Are all the caches used by Solr on or off heap?
> >
> >
> > Given this scenario, where GC is the primary bottleneck, what are good
> > recommended memory settings for Solr? Should I increase the heap memory
> > (that will only postpone the problem until the heap becomes full again
> > after a while)? Will memory maps help at all in this scenario?
> >
> >
> > Kindly advise on the best practices
> > Thanks
> > Nitin
>
>
>
