lucene-solr-user mailing list archives

From Walter Underwood <>
Subject Re: What is the bottleneck for an optimise operation?
Date Thu, 02 Mar 2017 17:28:23 GMT
6.4.0 added a lot of metrics collection to low-level index calls, which makes many operations slow. Go back to
6.3.0 or wait for 6.4.2.

Meanwhile, stop running optimize. You almost certainly don’t need it.
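
For reference, an optimize is normally triggered explicitly by a client, often as an update request of roughly the form below; finding and removing that call is usually all that is needed. This is a sketch only: the host, port, and collection name ("mycollection") are placeholders, not anything from your setup.

```shell
# Hypothetical example of the kind of request to look for and remove.
# Host, port, and collection name are placeholders.
curl "http://localhost:8983/solr/mycollection/update?optimize=true"
```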

24 GB is a huge heap. Do you really need that? We run a 15 million doc index with an 8 GB
heap (Java 8u121, G1 collector). I recommend a smaller heap so the OS can use that RAM to
cache file buffers instead.
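
As a sketch, assuming you start Solr with the stock bin/solr scripts, the heap is set in solr.in.sh; a smaller fixed heap with the G1 collector would look roughly like this (values here are illustrative, matching the 8 GB example above):

```shell
# solr.in.sh fragment (sketch; assumes the standard bin/solr startup script)
SOLR_HEAP="8g"            # fixed 8 GB heap; the remaining RAM goes to the OS page cache
GC_TUNE="-XX:+UseG1GC"    # G1 collector, as with Java 8u121 above
```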

Walter Underwood (my blog)

> On Mar 2, 2017, at 7:04 AM, Caruana, Matthew <> wrote:
>
> I’m currently performing an optimise operation on a ~190GB index with about 4 million
> documents. The process has been running for hours.
>
> This is surprising, because the machine is an EC2 r4.xlarge with four cores and 30GB
> of RAM, 24GB of which is allocated to the JVM.
>
> The load average has been steady at about 1.3. Memory usage is 25% or less the whole
> time. iostat reports ~6% util.
>
> What gives?
>
> Running Solr 6.4.1.
