lucene-solr-user mailing list archives

From jimtronic <jimtro...@gmail.com>
Subject Re: Memory Guidance
Date Mon, 11 Mar 2013 18:01:58 GMT
Thanks. This is on Linux, and the box is dedicated to Solr.

It's been hard for me to pinpoint problems -- or even to confirm that there is
a problem!

My general approach has been to see how much I can put onto one box. So I
have 13 separate Solr cores, some of which are very active in terms of
writes, reads, and sorts. There are also periodic DIH updates.

I'm running load tests that try to mimic a set of real users signing up and
doing various things on the site, with some random think times. Everything
works great for a couple of hours, but then things slow down.

I realize this is all kind of vague, but I'm at the point where I'm
wondering what I should even be monitoring. The main things I'm tracking are
QTime and the number of concurrent users I'm able to support in the tests.
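
For the QTime side, below is a rough sketch of the kind of polling I could
script against the per-core mbeans stats handler. The host, core names,
handler path, and exact JSON field names are placeholders/assumptions, so
they'd need to be checked against the actual install:

# Rough sketch, untested: poll each core's mbeans stats handler and log
# average query time. Verify the URL and field names against
# /solr/<core>/admin/mbeans?stats=true&wt=json on a real install.
import json
import time
import urllib.request

CORES = ["core0", "core1"]  # placeholder core names
URL = ("http://localhost:8983/solr/{}/admin/mbeans"
       "?stats=true&cat=QUERYHANDLER&wt=json")

def select_stats(core):
    with urllib.request.urlopen(URL.format(core)) as resp:
        data = json.load(resp)
    # "solr-mbeans" comes back as a flat list:
    # [category, beans, category, beans, ...]
    mbeans = data["solr-mbeans"]
    beans = dict(zip(mbeans[0::2], mbeans[1::2]))["QUERYHANDLER"]
    stats = beans.get("/select", {}).get("stats", {})
    return stats.get("requests"), stats.get("avgTimePerRequest")

while True:
    for core in CORES:
        requests, avg_qtime = select_stats(core)
        print("%s %-10s requests=%s avgTimePerRequest=%s"
              % (time.strftime("%H:%M:%S"), core, requests, avg_qtime))
    time.sleep(60)

Run alongside the load test, that should at least show whether
avgTimePerRequest starts climbing at the same point the test slows down.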

Jim

On Mon, Mar 11, 2013 at 12:37 PM, Shawn Heisey wrote:

> On 3/11/2013 11:14 AM, Shawn Heisey wrote:
>
> > On 3/10/2013 8:00 PM, jimtronic wrote:
> >> I'm having trouble pinning down some problems while load testing my setup.
> >>
> >> If you saw these numbers on your dashboard, would they worry you?
> >>
> >> Physical Memory  97.6%
> >> 14.64 GB of 15.01 GB
> >>
> >> File Descriptor Count  19.1%
> >> 196 of 1024
> >>
> >> JVM-Memory  95%
> >> 1.67 GB (dark gray)
> >> 1.76 GB (med gray)
> >> 1.76 GB
> >
> > What OS?  If it's a unix/linux environment, the full output of the
> > 'free' command will be important.  Generally speaking, it's normal for
> > any computer (client or server, regardless of OS) to use all available
> > memory when under load.
>
> Replying to myself.  The cold must be getting to me. :)
>
> If nothing else is running on this server except for Solr, and your
> index is less than 15GB in size, these numbers would not worry me at
> all.  If your index is less than 30GB in size, you might still be OK,
> but at that point your index would exceed available RAM.  Chances are
> that you would be able to cache enough of it for good performance,
> depending on your schema.  The reason I say this is that you have
> about 2GB of RAM given to Solr, leaving about 13-14GB for OS disk caching.
>
> If the server is shared with other things, particularly a busy database
> or busy web server, then the above paragraph might not apply - you may
> not have enough resources for Solr to work effectively.
>
> Thanks,
> Shawn
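
Re: the 'free' output Shawn asked for -- here's a rough sketch of the same
breakdown pulled from /proc/meminfo, mainly to separate the OS page cache
(which should be what's holding the index files) from memory the processes
are actually using. Field names are assumed from the standard Linux layout;
the values there are in kB:

# Rough sketch: split "used" physical memory into page cache (reclaimable,
# mostly index file caching here) vs. memory actually held by processes.
# Reads /proc/meminfo directly; field names assume the standard Linux layout.
def meminfo():
    values = {}
    with open("/proc/meminfo") as f:
        for line in f:
            key, rest = line.split(":", 1)
            values[key] = int(rest.strip().split()[0])  # kB
    return values

m = meminfo()
total = m["MemTotal"]
page_cache = m.get("Buffers", 0) + m.get("Cached", 0)
free = m["MemFree"]
used_by_processes = total - free - page_cache

gb = 1024.0 * 1024.0  # kB -> GB
print("total:              %.2f GB" % (total / gb))
print("page cache:         %.2f GB" % (page_cache / gb))
print("used by processes:  %.2f GB" % (used_by_processes / gb))
print("truly free:         %.2f GB" % (free / gb))

If "used by processes" stays around the 2-3GB mark and the rest is page
cache, then the 97.6% physical memory figure is probably just the kernel
caching index files, as Shawn described.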



