From: Funtick <fuad@efendi.ca>
To: solr-user@lucene.apache.org
Date: Mon, 17 Aug 2009 21:11:52 -0700 (PDT)
Subject: Re: JVM Heap utilization & Memory leaks with Solr

BTW, you should really prefer JRockit, which really rocks!!! "Mission
Control" has the necessary tooling, and JRockit produces a _nice_ exception
stacktrace (explaining almost everything) even in case of an OOM, which the
Sun JVM still fails to produce.

SolrServlet still catches "Throwable":

  } catch (Throwable e) {
    SolrException.log(log, e);
    sendErr(500, SolrException.toStr(e), request, response);
  } finally {
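To make the implication concrete: catch (Throwable) also catches
OutOfMemoryError, so an OOM in the request path gets logged and turned into
a 500 response instead of surfacing with a JVM-level stack trace. A minimal,
hypothetical demo of the effect (standalone code, not taken from Solr; class
name is made up):

import java.util.ArrayList;
import java.util.List;

public class SwallowedOomDemo {
    public static void main(String[] args) {
        List<byte[]> hog = new ArrayList<byte[]>();
        try {
            while (true) {
                hog.add(new byte[1024 * 1024]); // 1 MB at a time until the heap is full
            }
        } catch (Throwable t) {
            hog.clear(); // free the heap so the catch block itself can allocate
            // OutOfMemoryError extends Error, which extends Throwable,
            // so it lands here instead of propagating to the JVM.
            System.err.println("Swallowed: " + t);
        }
    }
}

Run with a small heap (for example -Xmx32m), it prints the swallowed
OutOfMemoryError and exits normally.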
Rahul R wrote:
>
> Otis,
> Thank you for your response. I know there are a few variables here, but the
> difference in memory utilization with and without shards somehow leads me to
> believe that the leak could be within Solr.
>
> I tried using a profiling tool - YourKit. The trial version was free for 15
> days, but I couldn't find anything of significance.
>
> Regards
> Rahul
>
>
> On Tue, Aug 4, 2009 at 7:35 PM, Otis Gospodnetic wrote:
>
>> Hi Rahul,
>>
>> A) There are no known (to me) memory leaks.
>> I think there are too many variables for a person to tell you what exactly
>> is happening, plus you are dealing with the JVM here. :)
>>
>> Try jmap -histo:live PID-HERE | less and see what's using your memory.
>>
>> Otis
>> --
>> Sematext is hiring -- http://sematext.com/about/jobs.html?mls
>> Lucene, Solr, Nutch, Katta, Hadoop, HBase, UIMA, NLP, NER, IR
>>
>>
>> ----- Original Message ----
>> > From: Rahul R
>> > To: solr-user@lucene.apache.org
>> > Sent: Tuesday, August 4, 2009 1:09:06 AM
>> > Subject: JVM Heap utilization & Memory leaks with Solr
>> >
>> > I am trying to track memory utilization with my application that uses Solr.
>> > Details of the setup:
>> > - 3rd party software: Solaris 10, WebLogic 10, jdk_150_14, Solr 1.3.0
>> > - Hardware: 12 CPUs, 24 GB RAM
>> >
>> > For testing during PSR I am using a smaller subset of the actual data that
>> > I want to work with. Details of this smaller subset:
>> > - 5 million records, 4.5 GB index size
>> >
>> > Observations during PSR:
>> > A) I have allocated 3.2 GB for the JVM(s) that I used. After all users log
>> > out and I force a GC, only 60% of the heap is reclaimed. As part of the
>> > logout process I am invalidating the HttpSession and doing a close() on
>> > CoreContainer. From my application's side, I don't believe I am holding on
>> > to any resource. I wanted to know if there are known issues surrounding
>> > memory leaks with Solr?
>> > B) To further test this, I tried deploying with shards. 3.2 GB was
>> > allocated to each JVM. All JVMs had 96% free heap space after startup. I
>> > got varying results with this.
>> > Case 1: Used 6 WebLogic domains. My application was deployed on 1 domain.
>> > I split the 5 million index into 5 parts of 1 million each and used them
>> > as shards. After multiple users used the system and I forced a GC, around
>> > 94 - 96% of the heap was reclaimed in all the JVMs.
>> > Case 2: Used 2 WebLogic domains. My application was deployed on 1 domain.
>> > On the other, I deployed the entire 5 million record index as one shard.
>> > After multiple users used the system and I forced a GC, around 76% of the
>> > heap was reclaimed in the shard JVM, and 96% was reclaimed in the JVM where
>> > my application was running. This result further convinces me that my
>> > application can be absolved of holding on to memory resources.
>> >
>> > I am not sure how to interpret these results. For searching I am using:
>> > Without shards: EmbeddedSolrServer
>> > With shards: CommonsHttpSolrServer
>> > In terms of Solr objects, this is what differs in my code between normal
>> > search and shard (distributed) search.
>> >
>> > After looking at Case 1, I thought that the CommonsHttpSolrServer was more
>> > memory efficient, but Case 2 proved me wrong. Or could there still be
>> > memory leaks in my application? Any thoughts or suggestions would be
>> > welcome.
>> >
>> > Regards
>> > Rahul
>>
>
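For reference, the two search setups described in the quoted message boil
down to roughly the following SolrJ usage. This is a rough sketch under
stated assumptions, not Rahul's actual code: the Solr home path, host names,
ports and shard list are placeholders, and the CoreContainer teardown is
shown with shutdown(), the method I believe the 1.3 API offers (Rahul refers
to it as close()).

import org.apache.solr.client.solrj.SolrQuery;
import org.apache.solr.client.solrj.SolrServer;
import org.apache.solr.client.solrj.embedded.EmbeddedSolrServer;
import org.apache.solr.client.solrj.impl.CommonsHttpSolrServer;
import org.apache.solr.client.solrj.response.QueryResponse;
import org.apache.solr.core.CoreContainer;

public class SearchSetups {

    // Without shards: the index lives inside the application's own JVM.
    static QueryResponse embeddedSearch(String queryString) throws Exception {
        System.setProperty("solr.solr.home", "/path/to/solr/home"); // placeholder path
        CoreContainer.Initializer initializer = new CoreContainer.Initializer();
        CoreContainer container = initializer.initialize();
        SolrServer server = new EmbeddedSolrServer(container, ""); // "" = default core
        try {
            return server.query(new SolrQuery(queryString));
        } finally {
            container.shutdown(); // the teardown done at logout
        }
    }

    // With shards: plain HTTP client plus a "shards" parameter; the index
    // memory lives in the remote shard JVMs, not in this one.
    static QueryResponse shardedSearch(String queryString) throws Exception {
        SolrServer server = new CommonsHttpSolrServer("http://host1:7001/solr"); // placeholder URL
        SolrQuery query = new SolrQuery(queryString);
        query.set("shards", "host1:7001/solr,host2:7001/solr"); // placeholder shard list
        return server.query(query);
    }
}

The structural difference relevant to the heap numbers above: with
EmbeddedSolrServer the Lucene index, its caches and searchers are held in
the application's heap, whereas with CommonsHttpSolrServer the application
only holds lightweight client objects and the index memory sits in whichever
JVM hosts the shard.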