lucene-solr-user mailing list archives

From: Sohan Kalsariya <>
Subject: Re: How are you handling "killer queries" with solr?
Date: Wed, 09 Apr 2014 08:57:55 GMT
So what is the issue, and what is the solution?
Do I need to change any configuration in my solrconfig.xml?
I have attached my solrconfig.xml;
please have a look.

On Wed, Apr 9, 2014 at 1:32 AM, Toke Eskildsen <> wrote:

> Shawn Heisey [] wrote:
> > Are you using the Jetty that comes with Solr, or are you using Jetty
> > from another source?  If you are using Jetty from another source, the
> > maxThreads parameter may not be high enough. I believe the default in a
> > typical Jetty config is 200, but the Jetty that comes with Solr has this
> > set to 10000 -- because Solr should not be limited in the number of
> > threads it can create.
> That seems a bit strange to me. Doesn't that make it hard to allocate
> memory resources? If we have a small index (10M documents, 20GB) and do
> faceted searches on a field with 5M unique values, the temporary memory
> overhead for a single search is ~1MB for the bitmap with docIDs and ~20MB
> for the counters for the facet + this & that. Let's just say 25MB. If the
> normal load is a maximum of 10 concurrent searches, then we need 250MB for
> the temporary overhead. Of course there must also be room for the static
> structures, the caches and such, so let's say 1GB heap minimum and up it to
> 2GB to have room. The problem is that the extra GB only buys about 40 extra
> concurrent searches. If for some reason there is a sudden burst of 6 times
> the normal max (stuff happens), then we get the nasty OOMs.
> Since throughput quickly stops rising when the number of concurrent
> threads passes the number of CPUs and since machines with 100+ CPUs are
> still quite rare, wouldn't it make more sense to keep the 200 threads and
> queue the requests instead? Or even lower the default number of threads to
> guard against the OOM-surprises?
> > Thanks,
> > Shawn

*Sohan Kalsariya*
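
For reference, the maxThreads setting discussed in the quoted mail is, in Solr's bundled Jetty, just a jetty.xml value. Below is a minimal embedded-Jetty sketch of the same idea (the Jetty 9 style API is assumed; Solr itself ships this as configuration rather than code, and the 8983 port and the two pool sizes simply mirror the numbers above):

import org.eclipse.jetty.server.Server;
import org.eclipse.jetty.server.ServerConnector;
import org.eclipse.jetty.util.thread.QueuedThreadPool;

public class JettyMaxThreadsSketch {
    public static void main(String[] args) throws Exception {
        QueuedThreadPool threadPool = new QueuedThreadPool();
        threadPool.setMaxThreads(10000);  // Solr's bundled jetty.xml raises this to 10000
        // threadPool.setMaxThreads(200); // a stock Jetty install defaults to around 200

        Server server = new Server(threadPool);
        ServerConnector connector = new ServerConnector(server);
        connector.setPort(8983);          // Solr's usual port, assumed here
        server.addConnector(connector);

        server.start();
        server.join();
    }
}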
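
Toke's memory estimate can also be reproduced as back-of-the-envelope arithmetic. A small sketch, assuming 1 bit per document for the docID bitmap and a 4-byte int counter per unique facet value (the figures are the ones from the mail, not measurements):

public class FacetMemorySketch {
    public static void main(String[] args) {
        long docs = 10_000_000L;              // 10M documents
        long uniqueFacetValues = 5_000_000L;  // 5M unique values in the facet field

        long bitmapBytes  = docs / 8;              // 1.25MB for the bitmap with docIDs (~1MB in the mail)
        long counterBytes = uniqueFacetValues * 4; // 20MB of int counters for the facet
        long perSearch    = 25L * 1024 * 1024;     // call it 25MB per search, incl. "this & that"

        long normalLoad = 10;                      // normal max of concurrent searches
        System.out.println("bitmap bytes:            " + bitmapBytes);
        System.out.println("counter bytes:           " + counterBytes);
        System.out.println("normal-load overhead MB: " + normalLoad * perSearch / (1024 * 1024)); // 250

        long extraHeap = 1024L * 1024 * 1024;      // the "extra GB" of headroom
        System.out.println("extra searches per GB:   " + extraHeap / perSearch);                  // 40

        long burst = 6 * normalLoad;               // a sudden 6x burst
        System.out.println("burst overhead MB:       " + burst * perSearch / (1024 * 1024));      // 1500 -> OOM risk
    }
}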
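
The alternative Toke suggests, capping worker threads near the CPU count and queueing the rest, looks roughly like the following with a plain JDK executor. This is a sketch of the idea only, not how Jetty or Solr actually handle requests, and the queue capacity of 200 is an arbitrary example:

import java.util.concurrent.LinkedBlockingQueue;
import java.util.concurrent.RejectedExecutionException;
import java.util.concurrent.ThreadPoolExecutor;
import java.util.concurrent.TimeUnit;

public class BoundedSearchPoolSketch {
    public static void main(String[] args) {
        // Throughput stops rising once concurrent threads pass the CPU count,
        // so cap the workers there and let excess requests wait in a bounded
        // queue instead of each allocating its ~25MB of temporary overhead.
        int workers = Runtime.getRuntime().availableProcessors();
        int queueCapacity = 200; // arbitrary example value, not a Solr/Jetty default

        ThreadPoolExecutor searchPool = new ThreadPoolExecutor(
                workers, workers,                      // fixed-size pool
                0L, TimeUnit.MILLISECONDS,
                new LinkedBlockingQueue<Runnable>(queueCapacity),
                new ThreadPoolExecutor.AbortPolicy()); // reject instead of OOM under a burst

        for (int i = 0; i < 1000; i++) {
            final int request = i;
            try {
                searchPool.execute(() -> {
                    // stand-in for one faceted search and its temporary allocations
                    System.out.println("handling request " + request);
                });
            } catch (RejectedExecutionException e) {
                // queue full: fail fast rather than pile up memory
                System.out.println("rejected request " + request);
            }
        }
        searchPool.shutdown();
    }
}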
