lucene-solr-user mailing list archives

From Otis Gospodnetic <otis.gospodne...@gmail.com>
Subject Re: Why is SolrCloud doing a full copy of the index?
Date Mon, 06 May 2013 20:53:25 GMT
Hi,

I just looked at SPM monitoring we have for Solr servers that run
search-lucene.com.  One of them does 1-2 garbage collections/minute.
Another one is closer to 10.  These are both small servers with small
JVM heaps.
Here is a graph of one of them:

https://apps.sematext.com/spm/s/104ppwguao

I just looked at some other Java servers we have running (not Solr),
and I see close to 60 small collections per minute.

So these numbers will vary a lot depending on the heap size and other
JVM settings, as well as the actual code/usage. :)

Otis
--
Solr & ElasticSearch Support
http://sematext.com/

On Mon, May 6, 2013 at 4:39 PM, Shawn Heisey <solr@elyograg.org> wrote:
> On 5/6/2013 1:39 PM, Michael Della Bitta wrote:
>>
>> Hi Shawn,
>>
>> Thanks a lot for this entry!
>>
>> I'm wondering, when you say "Garbage collections that happen more often
>> than ten or so times per minute may be an indication that the heap size is
>> too small," do you mean *any* collections, or just full collections?
>
>
> My gut reaction is any collection, but in extremely busy environments a rate
> of ten per minute might be a very slow day on a setup that's working
> perfectly.
>
> As I wrote that particular bit, I was thinking that any number I put there
> was probably wrong for some large subset of users, but I wanted to finish
> putting down my thoughts and improve it later.
>
> Thanks,
> Shawn
>
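To see whether you are anywhere near the ten-collections-per-minute figure Shawn mentions, one common approach (a sketch, not from the thread; the log path is illustrative) is to enable GC logging on the Solr JVM with the HotSpot flags current at the time (Java 6/7) and count log entries per minute:

```shell
# Enable GC logging on a Java 6/7-era HotSpot JVM.
# Adjust the log path for your installation.
java -verbose:gc \
     -XX:+PrintGCDetails \
     -XX:+PrintGCDateStamps \
     -Xloggc:/var/log/solr/gc.log \
     -jar start.jar
```

Each collection (young or full) appears as a timestamped line in the log, so the date stamps let you compute the per-minute rate and distinguish minor collections from full ones.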
