lucene-solr-user mailing list archives

From Shawn Heisey <apa...@elyograg.org>
Subject Re: Planning and benchmarking Solr: resource consumption (RAM, disk, CPU, number of nodes)
Date Wed, 11 May 2016 23:05:16 GMT
On 5/11/2016 6:06 AM, Horváth Péter Gergely wrote:
> If there is no such research document available, I would be much obliged if
> you could give some hints on what and how to measure in Solr / Solr cloud
> world. (E.g. what the optimal resource utilization of a Solr instance is,
> how to recognize if an instance is thrashing etc.)

I don't know if you've seen this:

https://lucidworks.com/blog/sizing-hardware-in-the-abstract-why-we-dont-have-a-definitive-answer/

There quite simply is no general answer.  Scalability rarely follows a
predictable curve based on the amount of hardware you use ... and what
I've frequently found is that a given Solr install will perform *great*
until some magic unknown threshold is reached, and then suddenly it's
like somebody installed an analog modem in place of your network card. 
If you Google "performance curve knee" you will find some information on
this phenomenon.
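To make the "knee" idea concrete, here is a minimal sketch (not from the original mail) that locates the knee in a throughput-vs-load curve. The numbers are synthetic, purely illustrative measurements, not real Solr results:

```python
# Illustrative only: find the "knee" in a throughput-vs-load curve.
# Synthetic (load, queries-per-second) pairs: throughput grows,
# flattens, then collapses past the knee.
samples = [(10, 95), (20, 190), (40, 370), (80, 640), (160, 700), (320, 520)]

def knee(points):
    """Return the load level preceding the steepest drop in marginal throughput."""
    best_load, worst_gain = None, float("inf")
    for (l0, q0), (l1, q1) in zip(points, points[1:]):
        gain = (q1 - q0) / (l1 - l0)   # marginal QPS per unit of added load
        if gain < worst_gain:
            worst_gain, best_load = gain, l0
    return best_load

print(knee(samples))   # the load after which throughput actually falls
```

With real benchmark data the curve is noisier, but the shape is the same: marginal gains shrink gradually and then go sharply negative.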

The only way to know exactly how Solr will behave under a given workload
is to set up the system and see what happens.  After somebody gets
enough experience with Solr, they can take a look at details for a
specific install and *maybe* predict whether it will handle the load or
not ... but I've frequently been wrong (in both directions) when trying
to make that assessment.
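As a starting point for "set up the system and see what happens", here is a small latency-measurement sketch. The Solr URL, core name, and query are hypothetical placeholders; only the timing logic is the point:

```python
import statistics
import time
import urllib.request

# Hypothetical endpoint -- substitute your own host, collection, and query.
SOLR_URL = "http://localhost:8983/solr/mycollection/select?q=*:*"

def time_queries(fetch, n=50):
    """Call `fetch` n times and return (median, p95) latency in seconds."""
    latencies = []
    for _ in range(n):
        start = time.perf_counter()
        fetch()
        latencies.append(time.perf_counter() - start)
    latencies.sort()
    return statistics.median(latencies), latencies[int(0.95 * (len(latencies) - 1))]

# Against a running Solr instance:
#   med, p95 = time_queries(lambda: urllib.request.urlopen(SOLR_URL).read())
```

Repeating this at increasing query concurrency and document counts is one way to find where your particular install hits the knee before production does.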

Thanks,
Shawn

