lucene-solr-user mailing list archives

From Toke Eskildsen <...@statsbiblioteket.dk>
Subject Re: Best practices for Solr (how to update jar files safely)
Date Sat, 20 Feb 2016 14:55:09 GMT
Shawn Heisey <apache@elyograg.org> wrote:
> I've updated the "Taking Solr to Production" reference guide page with
> what I feel is an appropriate caution against running multiple instances
> in a typical installation.  I'd actually like to use stronger language,

And I would like you to use softer language.

Machines get bigger all the time and, as you state yourself, GC can (easily) become a problem as the heap grows. Given the 32GB JVM limit for compressed pointers (compressed oops), a max Xmx just below 32GB looks like a practical choice for a Solr installation (where possible, of course): running two instances of 31GB provides more usable memory than a single instance of 64GB.
https://blog.codecentric.de/en/2014/02/35gb-heap-less-32gb-java-jvm-memory-oddities/
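Whether a given Xmx still gets compressed oops can be checked directly by asking the JVM to print its final flag values; a minimal sketch, assuming a reasonably recent 64-bit HotSpot `java` on the path:

```shell
# Just below the ~32GB threshold, compressed oops should still be enabled
# (expect "UseCompressedOops ... = true"):
java -Xmx31g -XX:+PrintFlagsFinal -version | grep UseCompressedOops

# At 32GB and above, the JVM silently falls back to full 64-bit pointers
# (expect "UseCompressedOops ... = false"):
java -Xmx32g -XX:+PrintFlagsFinal -version | grep UseCompressedOops
```

The `-version` argument just makes the JVM exit immediately after printing its flags, so this is a cheap way to verify the setting before committing to a heap size.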

Caveat: I have not done any testing on this with Solr, so I do not know how large the effect is. Some things, such as String faceting, DocValues structures and some of the field caches, are array-of-atomics oriented and will not suffer from larger pointers. Other things, such as numerics faceting, large rows settings and grouping, use a lot of objects and will require more memory. The overhead will differ depending on usage.

We tend to use separate Solr installations on the same machines. For some machines we do it
to allow for independent upgrades (long story), for others because a heap of 200GB is not
something we are ready to experiment with.
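For reference, running two independent instances on the same machine is straightforward with the bin/solr script; a minimal sketch (the ports, heap sizes and home directories here are illustrative, not our actual setup):

```shell
# First instance: 31GB heap on the default port, with its own Solr home.
bin/solr start -p 8983 -m 31g -s /var/solr/instance1

# Second instance: a separate port and home directory so the two
# instances do not collide on sockets, cores or lock files.
bin/solr start -p 8984 -m 31g -s /var/solr/instance2
```

Each instance then gets its own JVM, heap and GC behaviour, which is the whole point of splitting.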

- Toke Eskildsen
