lucene-solr-user mailing list archives

From "Joshi, Shital" <Shital.Jo...@gs.com>
Subject RE: Solr4 performance
Date Mon, 24 Feb 2014 22:35:08 GMT
Thanks. 

We found some evidence that this could be the issue. We're monitoring closely to confirm this.


One question though: none of our nodes shows more than 50% of physical memory
used, so there is enough memory available for memory-mapped files. Can this
kind of pause still happen?
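One way to sanity-check that on a node (a rough sketch, assuming Linux) is to watch the kernel's page cache while a slow query runs; if the "Cached" figure drops, index pages are being evicted despite the apparent headroom:

```shell
# Report how much RAM the page cache (where mmapped index files live)
# currently holds, using the "Cached:" line of /proc/meminfo (in kB):
awk '/^Cached:/ {printf "page cache: %d MB\n", $2/1024}' /proc/meminfo
```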


-----Original Message-----
From: Michael Della Bitta [mailto:michael.della.bitta@appinions.com] 
Sent: Friday, February 21, 2014 5:28 PM
To: solr-user@lucene.apache.org
Subject: Re: Solr4 performance

It could be that your query is churning the page cache on that node
sometimes, so Solr pauses while the OS drags those pages back off of disk.
Have you tried profiling your iowait in top or iostat during these pauses
(assuming you're using Linux)?
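If iostat (from the sysstat package) isn't installed on those boxes, a minimal substitute, assuming Linux, is to sample the kernel's cumulative iowait counter directly:

```shell
# Field 6 of the "cpu" line in /proc/stat is cumulative iowait jiffies.
# Sample it twice, one second apart; a large delta during a slow query
# points at disk contention.
t1=$(awk '/^cpu /{print $6}' /proc/stat)
sleep 1
t2=$(awk '/^cpu /{print $6}' /proc/stat)
echo "iowait jiffies in the last second: $((t2 - t1))"
```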

Michael Della Bitta

Applications Developer

o: +1 646 532 3062

appinions inc.

"The Science of Influence Marketing"

18 East 41st Street

New York, NY 10017

t: @appinions <https://twitter.com/Appinions> | g+:
plus.google.com/appinions<https://plus.google.com/u/0/b/112002776285509593336/112002776285509593336/posts>
w: appinions.com <http://www.appinions.com/>


On Fri, Feb 21, 2014 at 5:20 PM, Joshi, Shital <Shital.Joshi@gs.com> wrote:

> Thanks for your answer.
>
> We confirmed that it is not a GC issue.
>
> The auto-warming query looks good too, and queries before and after the
> long-running query come back really quickly. The only thing that stands
> out is that the shard where the query takes a long time has a couple
> million more documents than the other shards.
>
> -----Original Message-----
> From: Michael Della Bitta [mailto:michael.della.bitta@appinions.com]
> Sent: Thursday, February 20, 2014 5:26 PM
> To: solr-user@lucene.apache.org
> Subject: RE: Solr4 performance
>
> Hi,
>
> As for your first question, setting openSearcher to true means you will see
> the new docs after every hard commit. Soft and hard commits only become
> isolated from one another with that set to false.
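For reference, the usual solrconfig.xml shape for that isolation looks roughly like this (a sketch, not the poster's actual config; the intervals are illustrative):

```xml
<!-- Hard commits handle durability without opening a searcher;
     a soft commit controls when new documents become visible. -->
<autoCommit>
  <maxTime>600000</maxTime>
  <openSearcher>false</openSearcher>
</autoCommit>
<autoSoftCommit>
  <maxTime>30000</maxTime>
</autoSoftCommit>
```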
>
> Your second problem might be explained by your large heap and garbage
> collection. Walking a heap that large can take an appreciable amount of
> time. You might consider turning on the JVM options for logging GC and
> seeing if you can correlate your slow responses to times when your JVM is
> garbage collecting.
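For the HotSpot JVMs of that era (Java 6/7; the option names changed with Java 9's unified logging), the relevant flags look roughly like this; the log path is illustrative:

```
-verbose:gc
-Xloggc:/var/log/solr/gc.log
-XX:+PrintGCDetails
-XX:+PrintGCDateStamps
-XX:+PrintGCApplicationStoppedTime
```

PrintGCApplicationStoppedTime is the one to watch here, since it records the total time application threads were paused, which can be lined up against the slow-query timestamps.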
>
> Hope that helps,
> On Feb 20, 2014 4:52 PM, "Joshi, Shital" <Shital.Joshi@gs.com> wrote:
>
> > Hi!
> >
> > I have a few other questions regarding the Solr4 performance issue
> > we're facing.
> >
> > We're committing data to Solr4 every ~30 seconds (up to 20K rows). We use
> > commit=false in the update URL. We have only the hard commit setting in
> > the Solr4 config.
> >
> > <autoCommit>
> >   <maxTime>${solr.autoCommit.maxTime:600000}</maxTime>
> >   <maxDocs>100000</maxDocs>
> >   <openSearcher>true</openSearcher>
> > </autoCommit>
> >
> >
> > Since we're not using soft commit at all (commit=false), the caches will
> > not get reloaded on every commit and recently added documents will not
> > be visible, correct?
> >
> > What we see is that queries which usually take a few milliseconds take
> > ~40 seconds once in a while. Can high IO during a hard commit cause
> > queries to slow down?
> >
> > For some shards we see 98% full physical memory. We have 60GB machines
> > (30GB JVM, 28GB free RAM, ~35GB of index). We're ruling out high
> > physical memory use as the cause of the slow queries. We're in the
> > process of reducing the JVM size anyway.
> >
> > We have never run an optimize until now; optimizing in QA didn't yield
> > a performance gain.
> >
> > Thanks much for all help.
> >
> > -----Original Message-----
> > From: Shawn Heisey [mailto:solr@elyograg.org]
> > Sent: Tuesday, February 18, 2014 4:55 PM
> > To: solr-user@lucene.apache.org
> > Subject: Re: Solr4 performance
> >
> > On 2/18/2014 2:14 PM, Joshi, Shital wrote:
> > > Thanks much for all suggestions. We're looking into reducing the
> > > allocated heap size of the Solr4 JVM.
> > >
> > > We're using NRTCachingDirectoryFactory. Does it use MMapDirectory
> > > internally? Can someone please confirm?
> >
> > In Solr, NRTCachingDirectory does indeed use MMapDirectory as its
> > default delegate.  That's probably also the case with Lucene -- these
> > are Lucene classes, after all.
> >
> > MMapDirectory is almost always the most efficient way to handle on-disk
> > indexes.
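For reference, this is roughly the directoryFactory entry from the Solr 4 example config, which makes the NRTCachingDirectory-over-MMapDirectory arrangement explicit (shown as a sketch; check your own solrconfig.xml):

```xml
<!-- NRTCachingDirectoryFactory keeps small, freshly flushed segments
     in RAM and delegates everything else to the standard factory,
     which resolves to MMapDirectory on 64-bit Linux. -->
<directoryFactory name="DirectoryFactory"
                  class="${solr.directoryFactory:solr.NRTCachingDirectoryFactory}"/>
```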
> >
> > Thanks,
> > Shawn
> >
> >
>
