lucene-solr-user mailing list archives

From Erick Erickson <erickerick...@gmail.com>
Subject Re: Solr caching the index file make server refuse serving
Date Fri, 25 Aug 2017 02:55:50 GMT
10 billion documents on 12 cores is over 800M documents/shard at best.
This is _very_ aggressive for a shard. Could you give more information
about your setup?
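For concreteness, the per-shard arithmetic above can be checked directly (this assumes the documents are spread evenly, one shard per core, which is the best case referred to):

```python
# Back-of-the-envelope shard math for the setup described above.
# Assumption: 10 billion documents spread evenly across 12 shards.
total_docs = 10_000_000_000
shards = 12

docs_per_shard = total_docs // shards
print(f"{docs_per_shard:,} documents per shard")  # 833,333,333 - over 800M
```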

I've seen 250M docs fit in 12G of memory. I've also seen 10M documents
strain 32G of memory. Details matter a lot. The only way I've been able
to determine what a reasonable number of docs is, for my queries on my
data, is to do "the sizing exercise", which I've outlined here:

https://lucidworks.com/2012/07/23/sizing-hardware-in-the-abstract-why-we-dont-have-a-definitive-answer/

While this was written over 5 years ago, it's still accurate.
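The exercise in that post boils down to: index progressively more documents into a test collection, fire your real queries at each step, and find where latency (or memory pressure) crosses your budget. A minimal sketch of that loop, where `measure_latency_ms` is a hypothetical stand-in (a real run would issue your actual queries and read QTime from Solr's response header):

```python
# Sketch of "the sizing exercise": grow the doc count stepwise and find the
# largest count whose measured query latency still fits the budget.

def measure_latency_ms(doc_count: int) -> float:
    # HYPOTHETICAL stand-in model; replace with real queries against your
    # test collection. Here latency simply grows with document count.
    return 5.0 + doc_count / 20_000_000

def max_docs_within_budget(latency_budget_ms: float,
                           step: int = 50_000_000,
                           limit: int = 1_000_000_000) -> int:
    """Largest tested doc count whose latency stays within budget."""
    best = 0
    for docs in range(step, limit + step, step):
        if measure_latency_ms(docs) > latency_budget_ms:
            break
        best = docs
    return best

print(max_docs_within_budget(latency_budget_ms=30.0))
```

The point is not the stand-in model but the procedure: the knee of the curve, on your hardware with your queries, is the only trustworthy docs-per-shard number.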

Best,
Erick

On Thu, Aug 24, 2017 at 6:10 PM, 陈永龙 <cyl@gfire.cn> wrote:
> Hello,
>
> ENV:  SolrCloud 6.3
>
> 3 × Dell servers
>
> 128G RAM, 12 cores, 4.3T disk per server
>
> 3 Solr nodes per server
>
> 20G heap per node (with parameter -m 20g)
>
> 10 billion documents total
>
> Problem:
>
>          When we start SolrCloud, the cached index pushes memory usage to
> 98% or more. And if we continue indexing documents (batch commits of
> 10,000 documents), one or more servers will refuse to serve. We cannot
> log in via ssh; even the monitor is unresponsive.
>
> So, how can I limit Solr's behavior of caching the index in memory?
>
> Thanks to anyone who can help!
>
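As a sanity check on the numbers in this question: with three 20G-heap nodes on a 128G server, the JVM heaps alone claim roughly half of RAM, and Linux fills much of the rest with page cache for the index files, which is what shows up as 98% "used". A quick sketch of that accounting, assuming the heaps are the only significant non-cache consumers:

```python
# Memory accounting for one server as described in the question.
# Assumption: the three Solr JVM heaps dominate non-cache memory use.
ram_gb = 128
nodes_per_server = 3
heap_per_node_gb = 20  # each node started with -m 20g

heap_total_gb = nodes_per_server * heap_per_node_gb
cache_headroom_gb = ram_gb - heap_total_gb
print(f"JVM heaps: {heap_total_gb}G, left for OS page cache: {cache_headroom_gb}G")
```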
