lucene-solr-user mailing list archives

From Shawn Heisey <>
Subject Re: Solr 4.2, reindexing, transaction logs, high memory usage
Date Fri, 22 Mar 2013 18:22:36 GMT
On 3/22/2013 9:24 AM, Raghav Karol wrote:
> We run this index sharded across 8 Solr cores on a single host, an
> m2.4xlarge EC2 instance. We do not use ZooKeeper (because of
> operational issues on our live indexes) and manage the sharding
> ourselves.
> For this index we run with -Xmx30G and observe (in jconsole) that
> Solr runs with approximately 25G.
> Autocommit kills solr, it sends heap memory usage to max and kills
> solr. The reason appears to be committing to all cores in parallel.
> Disabling autoCommit and running a loop like
>      while true; do for i in $(seq 0 7); do curl -s \
>        "http://localhost:8085/solr/core${i}/update?commit=true&wt=json"; done; done
> produces:
> {"responseHeader":{"status":0,"QTime":8297}}
> {"responseHeader":{"status":0,"QTime":8358}}
> {"responseHeader":{"status":0,"QTime":9552}}
> {"responseHeader":{"status":0,"QTime":8368}}
> {"responseHeader":{"status":0,"QTime":9296}}
> {"responseHeader":{"status":0,"QTime":8527}}
> {"responseHeader":{"status":0,"QTime":9458}}
> {"responseHeader":{"status":0,"QTime":8929}}
> 8 seconds to process a commit when there were no changes to the index!?!

If this index is actively processing queries, then what you are 
experiencing here is probably cache warming - Solr looks at the entries 
in each of its caches and uses those entries to run queries against the 
new index to pre-populate the new caches.  The number of entries that 
are used for warming queries will be controlled by the autoWarmCount 
value on the cache definition.
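
For reference, a cache definition in solrconfig.xml looks something like 
the following (the class and sizes here are illustrative, not 
recommendations for your index; note the attribute is spelled 
autowarmCount in the config file):

```xml
<!-- Illustrative filterCache definition. autowarmCount is how many
     entries from the old cache are replayed as warming queries
     against the new searcher after each commit. -->
<filterCache class="solr.FastLRUCache"
             size="512"
             initialSize="512"
             autowarmCount="32"/>
```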

> Why does solr need such a large heap space for this index (it dies
> with 10G and 20G and is constant at 28G in jconsole)?
> Why does running commits in parallel, via autoCommit or the command
> above, exhaust the memory?
> Are we using dynamic fields incorrectly?

When you run a commit, Solr fires up a new index searcher object, 
complete with caches, which will then be autowarmed from the old caches 
as described above.  Until the new object is fully warmed, the old 
searcher will exist and will continue to serve queries.  If you issue 
another commit while a new searcher is already warming, then *another* 
searcher is likely to get fired up as well, depending on the value of 
maxWarmingSearchers in your solrconfig.xml file.
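
If you want to put a hard cap on that, the setting is a single element 
in solrconfig.xml; the value shown here is just an example:

```xml
<!-- Illustrative: limits how many searchers may warm concurrently.
     A commit that would exceed this limit fails with an error
     instead of starting yet another warming searcher. -->
<maxWarmingSearchers>1</maxWarmingSearchers>
```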

The amount of memory required by a searcher can be very high, due in 
part to caches, especially the FieldCache, which is used internally by 
Lucene and is not configurable like the others.  If you have 8 cores and 
you run commits on them in parallel that take several seconds, then for 
several seconds you will have at least sixteen searchers running.  If 
your maxWarmingSearchers value is higher than 1, you might end up with 
even more searchers running at the same time.  This is likely where your 
memory is going.

By lowering the autoWarmCount values on your caches, you can reduce the 
amount of time it takes to do a commit.  You should also keep track of 
whether anything has actually changed on each core and avoid issuing a 
commit when nothing has changed.  Also, it would be a good idea to 
stagger the commits so that all your cores are not committing at the 
same time.
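
As a sketch of the staggering idea, your commit loop could pause between 
cores so that at most one searcher per host is warming at any moment. 
The URL pattern and core names follow the loop quoted above; COMMIT_CMD 
is a hypothetical override hook (it defaults to curl) that I'm adding 
for illustration:

```shell
# Sketch: commit each core in turn with a pause in between, instead of
# firing all eight commits at once.
stagger_commits() {
    base_url="$1"   # e.g. http://localhost:8085/solr
    pause="$2"      # seconds to wait between cores
    for i in $(seq 0 7); do
        # COMMIT_CMD is a hypothetical hook so this can be dry-run;
        # by default it issues the same curl as the original loop.
        ${COMMIT_CMD:-curl -s} "${base_url}/core${i}/update?commit=true&wt=json"
        sleep "$pause"
    done
}
```

You would call it as, say, `stagger_commits http://localhost:8085/solr 10` 
to leave ten seconds between cores.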

