lucene-solr-user mailing list archives

From Shawn Heisey <apa...@elyograg.org>
Subject Re: Config for massive inserts into Solr master
Date Sun, 09 Oct 2016 19:29:12 GMT
On 10/9/2016 12:33 PM, Reinhard Budenstecher wrote:
> We have an ETL process which updates the product catalog. This produces massive inserts on
> MASTER, but there are no reads. Often hundreds of thousands of records are inserted per
> minute. But sometimes I get an OOM error, and the only log entry I can find is:
>
> 2016-10-09T16:17:34.440+0200: 63872,249: [Full GC (Allocation Failure) 2016-10-09T16:17:34.440+0200:
> 63872,249: [CMS: 16387099K->16387087K(16777216K), 4,2227778 secs] 17782619K->17782606K(30758272K),
> [Metaspace: 36452K->36452K(38912K)], 4,2229287 secs] [Times: user=4,22 sys=0,01, real=4,22 secs]
>
> As I'm a bit lost with all this: is there anybody who can help me find the best config for
> massive inserts on MASTER and massive reads on SLAVE? Is there a common approach? What further
> details should I provide? Or is the simplest solution to raise the heap on MASTER from 32GB
> of the available 64GB to a higher value?
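[Editorial note: reading the numbers out of the quoted GC log line shows why an allocation failure is close at hand: the CMS old generation is essentially full, and the full GC reclaims almost nothing. A minimal sketch of the arithmetic, using the sizes (in KB) copied from the log:]

```python
# Sizes in KB, copied from the quoted GC log line.
old_used_before = 16387099   # CMS old gen before the full GC
old_used_after = 16387087    # CMS old gen after the full GC
old_capacity = 16777216      # old gen capacity (16 GB)

freed_kb = old_used_before - old_used_after
occupancy = old_used_after / old_capacity
print(f"freed by full GC: {freed_kb} KB")                      # 12 KB
print(f"old gen occupancy after full GC: {occupancy:.1%}")     # 97.7%
```

A full GC that frees 12 KB out of a 16 GB old generation means the live data no longer fits in the heap, which is the classic prelude to an OutOfMemoryError.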

What version of Solr?  How has it been installed and started?

Is this a single index core with 150 million docs and 140GB index
directory size, or is that the sum total of all the indexes on the machine?

It seems unlikely to me that you would see OOM errors when indexing with
a 32GB heap and no queries.  You might try dropping the max heap to 31GB
instead of 32GB, so your Java pointer sizes are cut in half.  You might
actually see a net increase in the amount of memory that Solr can
utilize with that change.
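As a sketch of that change, assuming a standard service installation where the heap is set in `solr.in.sh` (the path `/etc/default/solr.in.sh` is an assumption and may differ on your system):

```shell
# /etc/default/solr.in.sh  (path assumed; adjust for your install)
# Keep the max heap just under 32GB so the JVM can continue using
# compressed ordinary object pointers (32-bit object references).
SOLR_HEAP="31g"

# Optional check: confirm compressed oops stay enabled at this size.
java -Xmx31g -XX:+PrintFlagsFinal -version | grep UseCompressedOops
```

On 64-bit HotSpot JVMs, heaps of roughly 32GB and above disable compressed oops, so every object reference doubles from 4 to 8 bytes; a 31GB heap can therefore hold more live data than a 32GB one.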

Whether the errors continue or not, can you copy the full error from
your log with stacktrace(s) so we can see it?

Thanks,
Shawn

