lucene-solr-user mailing list archives

From Mike Klaas <>
Subject Re: Memory improvements
Date Sat, 09 Feb 2008 05:07:29 GMT
On 7-Feb-08, at 3:29 PM, Sundar Sankaranarayanan wrote:

> Hi All,
>           I am running an application in which I am indexing
> about 300,000 records from a table that has 6 columns. I am
> committing to
> the Solr server after every 10,000 rows, and I observed that by the
> end of about 150,000 the process eats up about 1 GB of memory; since
> my server has only 1 GB, it throws an Out of Memory error.
> However,
> if I commit after every 1,000 rows, it is able to process about
> 200,000 rows before running out of memory. This is just a dev server,
> and
> the production data will be much bigger. It would be great if
> someone could suggest a way to improve this scenario.

Try reducing maxBufferedDocs to something smaller.  Memory
consumption is more directly affected by that (and the associated
size of your documents) than by commit frequency.  Setting
maxPendingDeletes lower is also better than committing frequently.
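In solrconfig.xml of that era, this would look roughly like the following sketch (the element placement follows the Solr 1.x config format, and the values are purely illustrative, not recommendations):

```xml
<!-- solrconfig.xml (Solr 1.x era) -- illustrative values, tune for your heap -->
<indexDefaults>
  <!-- flush buffered documents to disk once this many accumulate in RAM -->
  <maxBufferedDocs>1000</maxBufferedDocs>
  <!-- flush pending deletes before the in-memory delete buffer grows too large -->
  <maxPendingDeletes>1000</maxPendingDeletes>
</indexDefaults>
```

Lowering these values bounds how many documents (and deletes) Lucene buffers in RAM between flushes, so the indexer's footprint stays roughly constant no matter how often you commit.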
