lucene-solr-user mailing list archives

From "Lance Norskog" <goks...@gmail.com>
Subject RE: Memory improvements
Date Fri, 08 Feb 2008 04:01:19 GMT
Solr 1.2 has a bug: the "commit after N documents" autocommit directive is not honored.
It does, however, honor the "commit after N milliseconds" directive.

This is fixed in Solr 1.3. 
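For reference, both autocommit directives are configured in the updateHandler section of solrconfig.xml; a minimal sketch (the values here are illustrative, matching the 10,000-row batches described below):

```xml
<!-- solrconfig.xml, inside <updateHandler> -->
<autoCommit>
  <!-- commit after N documents: broken in 1.2, works in 1.3 -->
  <maxDocs>10000</maxDocs>
  <!-- commit after N milliseconds: honored in both versions -->
  <maxTime>60000</maxTime>
</autoCommit>
```

On 1.2, only the maxTime setting will actually trigger commits; on 1.3 both work.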

-----Original Message-----
From: Sundar Sankaranarayanan [mailto:Sundar.Sankaranarayanan@phoenix.edu] 
Sent: Thursday, February 07, 2008 3:30 PM
To: solr-user@lucene.apache.org
Subject: Memory improvements

Hi All,
          I am running an application that needs to index about
300,000 records from a table with 6 columns. I commit to the Solr
server after every 10,000 rows, and I observed that by the time about
150,000 rows are indexed, the process uses about 1 GB of memory. Since my
server has only 1 GB, this throws an Out of Memory error. However, if I
commit after every 1,000 rows, it gets through about 200,000 rows before
running out of memory. This is just a dev server, and the production data
will be much larger. It would be great if someone could suggest a way to
improve this situation.
 
 
Regards
Sundar Sankaranarayanan

