lucene-solr-user mailing list archives

From Darrell Burgan <>
Subject RE: Solr 4.3.1 memory swapping
Date Thu, 27 Mar 2014 04:26:40 GMT
Okay, well, it didn't take long for the swapping to start happening on one of our nodes. Here
is a screenshot of the Solr console:

And here is a shot of top, with processes sorted by VIRT:

As shown, we have used more than 25% of the swap space, over 1GB, even though there is
16GB of OS RAM available and the Solr JVM has been allocated only 10GB. Further, we're only
consuming about 1.5GB (of 4GB committed) of the 10GB JVM heap.

Top shows that the Solr process (PID 21582) is using 2.4GB resident but has a virtual size of 82.4GB.
Presumably that virtual size is due to the memory-mapped index files. The other Java process
(PID 27619) is ZooKeeper.
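(As an aside, the resident-vs-virtual split can be read straight out of /proc on any Linux box; this is a sketch assuming the Solr PID from the top output above, so substitute your own. With a large index mapped via Lucene's MMapDirectory, VmSize will dwarf VmRSS because mapped files consume address space, not physical RAM.)

```shell
# Compare total virtual size (VmSize) with resident set (VmRSS), in kB.
# 21582 is the Solr JVM PID from the top screenshot; change as needed.
pid=21582
grep -E '^Vm(Size|RSS):' /proc/"$pid"/status
```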

So my question remains: why did we use any swap space at all? It doesn't seem like we're experiencing
memory pressure at the moment ... I'm confused.  :-)
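(One thing worth checking in this situation: the kernel will swap out idle anonymous pages to make room for page cache even when free memory looks adequate, and how eagerly it does so is governed by the vm.swappiness sysctl. A quick sketch, assuming a Linux system:)

```shell
# How aggressively the kernel trades anonymous pages for page cache
# (0-100; the default on RHEL 5 is 60).
cat /proc/sys/vm/swappiness

# To make the kernel strongly prefer reclaiming page cache over swapping
# (requires root; add vm.swappiness to /etc/sysctl.conf to persist):
# sysctl -w vm.swappiness=0
```

With a 60GB memory-mapped index competing for 16GB of RAM, a high swappiness value is a plausible reason the kernel pushes some idle JVM pages to swap.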


-----Original Message-----
From: Darrell Burgan [] 
Sent: Wednesday, March 26, 2014 10:45 PM
Subject: RE: Solr 4.3.1 memory swapping

Okay, I'll post some screenshots somewhere people can get to them, to demonstrate what I'm seeing.
Unfortunately I just deployed some unrelated changes to Solr that required me to restart each
node in the SolrCloud cluster, so right now the swap usage is minimal. I'll let it grow for
a few days and then send some URLs to the list.

BTW, we're running RHEL 5.9 (Tikanga) and uname -a reports:

Linux da-pans-xxx 2.6.18-348.12.1.el5 #1 SMP Mon Jul 1 17:54:12 EDT 2013 x86_64 x86_64 x86_64


-----Original Message-----
From: Shawn Heisey []
Sent: Wednesday, March 26, 2014 8:14 PM
Subject: RE: Solr 4.3.1 memory swapping

> Thanks - we're currently running Solr inside of RHEL virtual machines 
> inside of VMware. Running "numactl --hardware" inside the VM shows the
> following:
> available: 1 nodes (0)
> node 0 size: 16139 MB
> node 0 free: 364 MB
> node distances:
> node   0
>   0:  10
> So there is only one node and only one memory bank being shown. Am I
> correct in assuming that means NUMA can't be the issue?
> My best guess as to what is going on relates to that big memory-mapped 
> file Solr allocates. Our search index is about 60GB or so, much bigger 
> than the 16GB RAM the operating system has to work with. Could it be 
> that the swapping is due to the memory-mapped file in some way?
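(The single-node reading in the quoted numactl output can also be cross-checked without numactl, via sysfs; a sketch assuming a Linux system with sysfs mounted:)

```shell
# Each NUMA node the kernel knows about appears as a node<N> directory;
# a single entry here matches the "available: 1 nodes (0)" from numactl.
ls -d /sys/devices/system/node/node*
```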

If mmap is leading to swapping, that's a serious operating system glitch.
That's not supposed to happen. The NUMA idea is the only thing I know of that could cause
this to happen, assuming that there's not something else on the system that's using memory.

If you could run top, press shift-M to sort by memory, and then get a screenshot, that would
be good. Be sure the terminal has enough height that we can see quite a few of the top entries.
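(For the archives: a non-interactive equivalent of "run top and press shift-M" that is easy to paste into an email is to sort ps output by resident memory; this assumes the procps ps found on RHEL and most Linux distributions.)

```shell
# List processes sorted by resident memory (RSS), largest first,
# keeping the header plus the top entries.
ps aux --sort=-rss | head -15
```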
