lucene-solr-user mailing list archives

From Dmitry Kan <solrexpert@gmail.com>
Subject Re: unusually high 4.10.2 vs 4.3.1 RAM consumption
Date Tue, 17 Feb 2015 11:18:36 GMT
;) ok. Currently I'm trying the parallel GC options mentioned here:
http://comments.gmane.org/gmane.comp.jakarta.lucene.solr.user/101377

At least the RAM chart is starting to take on the expected saw-tooth shape.
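
For reference, the kind of options I mean (illustrative only; the exact
flags discussed in the linked thread may differ):

  -XX:+UseParallelGC       (parallel collector for the young generation)
  -XX:+UseParallelOldGC    (parallel compacting collector for the old generation)
  -XX:ParallelGCThreads=8  (number of GC worker threads; tune to the core count)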

On Tue, Feb 17, 2015 at 12:55 PM, Markus Jelsma <markus.jelsma@openindex.io>
wrote:

> I would have shared it if I had one :)
>
> -----Original message-----
> > From: Dmitry Kan <solrexpert@gmail.com>
> > Sent: Tuesday 17th February 2015 11:40
> > To: solr-user@lucene.apache.org
> > Subject: Re: unusually high 4.10.2 vs 4.3.1 RAM consumption
> >
> > Have you found an explanation for that?
> >
> > On Tue, Feb 17, 2015 at 12:12 PM, Markus Jelsma <markus.jelsma@openindex.io>
> > wrote:
> >
> > > We have seen an increase between 4.8.1 and 4.10.
> > >
> > > -----Original message-----
> > > > From: Dmitry Kan <solrexpert@gmail.com>
> > > > Sent: Tuesday 17th February 2015 11:06
> > > > To: solr-user@lucene.apache.org
> > > > Subject: unusually high 4.10.2 vs 4.3.1 RAM consumption
> > > >
> > > > Hi,
> > > >
> > > > We are currently comparing the RAM consumption of two parallel Solr
> > > > clusters running different Solr versions: 4.10.2 and 4.3.1.
> > > >
> > > > For comparable shard index sizes (20G and 26G), we observed RAM
> > > > footprints of 9G vs 5.6G (reserved RAM as seen by top), with 4.3.1
> > > > being the winner.
> > > >
> > > > We did not change solrconfig.xml when upgrading to 4.10.2, and we
> > > > reindexed the data from scratch. Commits are all controlled by the
> > > > client, i.e. there are no auto-commits.
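> > > >
> > > > For reference, the indexing path is essentially the following (a
> > > > minimal SolrJ 4.x sketch; the URL, core name and field are
> > > > placeholders, not our actual schema):
> > > >
> > > > import org.apache.solr.client.solrj.SolrServer;
> > > > import org.apache.solr.client.solrj.impl.HttpSolrServer;
> > > > import org.apache.solr.common.SolrInputDocument;
> > > >
> > > > public class ClientControlledCommit {
> > > >     public static void main(String[] args) throws Exception {
> > > >         // placeholder URL and core name
> > > >         SolrServer server =
> > > >                 new HttpSolrServer("http://localhost:8983/solr/collection1");
> > > >         SolrInputDocument doc = new SolrInputDocument();
> > > >         doc.addField("id", "1");
> > > >         server.add(doc);
> > > >         // explicit commit issued by the client; no autoCommit in solrconfig.xml
> > > >         server.commit();
> > > >         server.shutdown();
> > > >     }
> > > > }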
> > > >
> > > > Solr: 4.10.2 (high load, mass indexing)
> > > > Java: 1.7.0_76 (Oracle)
> > > > -Xmx25600m
> > > >
> > > >
> > > > Solr: 4.3.1 (normal load, no mass indexing)
> > > > Java: 1.7.0_11 (Oracle)
> > > > -Xmx25600m
> > > >
> > > > The RAM consumption remained the same after the load stopped on the
> > > > 4.10.2 cluster. Manually triggering a garbage collection on a 4.10.2
> > > > shard via jvisualvm dropped the used heap from 8.5G to 0.5G, but the
> > > > reserved RAM as seen by top remained at the 9G level.
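> > > >
> > > > To make the used-vs-reserved distinction concrete (my own sketch, not
> > > > from the linked thread): the JMX heap numbers distinguish "used" from
> > > > "committed", and the latter is roughly what top reports:
> > > >
> > > > import java.lang.management.ManagementFactory;
> > > > import java.lang.management.MemoryUsage;
> > > >
> > > > public class HeapReport {
> > > >     public static void main(String[] args) {
> > > >         MemoryUsage heap =
> > > >                 ManagementFactory.getMemoryMXBean().getHeapMemoryUsage();
> > > >         long mb = 1024 * 1024;
> > > >         // "used": live objects plus garbage not yet collected
> > > >         System.out.println("used:      " + heap.getUsed() / mb + " MB");
> > > >         // "committed": memory the JVM has reserved from the OS;
> > > >         // this tracks what top shows and rarely shrinks after a GC
> > > >         System.out.println("committed: " + heap.getCommitted() / mb + " MB");
> > > >         System.out.println("max:       " + heap.getMax() / mb + " MB");
> > > >     }
> > > > }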
> > > >
> > > > This unusual spike happened during mass data indexing.
> > > >
> > > > What else could account for such a difference: Solr or the JVM? Can it
> > > > be explained by the mass indexing alone? What is worrisome is that the
> > > > 4.10.2 shard reserves 8x what it actually uses.
> > > >
> > > > What can be done about this?
> > > >
> > > > --
> > > > Dmitry Kan
> > > > Luke Toolbox: http://github.com/DmitryKey/luke
> > > > Blog: http://dmitrykan.blogspot.com
> > > > Twitter: http://twitter.com/dmitrykan
> > > > SemanticAnalyzer: www.semanticanalyzer.info
> > > >
> > >
> >
> >
> >
> > --
> > Dmitry Kan
> > Luke Toolbox: http://github.com/DmitryKey/luke
> > Blog: http://dmitrykan.blogspot.com
> > Twitter: http://twitter.com/dmitrykan
> > SemanticAnalyzer: www.semanticanalyzer.info
> >
>



-- 
Dmitry Kan
Luke Toolbox: http://github.com/DmitryKey/luke
Blog: http://dmitrykan.blogspot.com
Twitter: http://twitter.com/dmitrykan
SemanticAnalyzer: www.semanticanalyzer.info
