lucene-solr-user mailing list archives

From Erick Erickson <erickerick...@gmail.com>
Subject Re: a core for every user, lots of users... are there issues
Date Wed, 04 Dec 2013 00:37:26 GMT
bq: Do you have any sense of what a good upper limit might be, or how we
might figure that out?

As always, "it depends" (tm). And the biggest thing it depends upon is the
number of simultaneous users you have and the size of their indexes. And
we've arrived at the black box of estimating size again. Siiigggghh... I'm
afraid that the only way is to test and establish some rules of thumb.

The transient core constraint will limit the number of cores loaded at
once. If you allow too many cores at once, you'll get OOM errors when all
the users pile on at the same time.
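Concretely, the cap is the transientCacheSize setting in solr.xml, and each per-user core opts in with transient=true and loadOnStartup=false in its core.properties. A minimal sketch, assuming Solr 4.x core discovery (verify the exact names against the LotsOfCores wiki page for your version):

```xml
<solr>
  <!-- Hard cap on how many transient cores stay loaded at once.
       When the cache fills, the least-recently-used transient core
       is closed to make room for the next one requested. -->
  <int name="transientCacheSize">100</int>
</solr>
```

Each user's core.properties then carries transient=true and loadOnStartup=false, so the core is neither opened at startup nor pinned in memory once loaded.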

Let's say you've determined that 100 is the limit for transient cores. If that limit is too low, what I suspect you'll see is degrading response times. Say 110 users are signed on and they submit queries in strict rotation, one after the other. Every request will evict the least-recently-used core and open another, and each open takes a bit. So that'll be a flag.
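To see why that's pathological, here's a toy LRU model of the transient-core cache (a hypothetical sketch, not Solr code): with 110 users cycling in order over a 100-slot cache, every single request is a miss.

```python
from collections import OrderedDict

# Toy model of a transient-core cache: an LRU of open cores.
# Hypothetical sketch to illustrate the eviction behavior only.
class TransientCoreCache:
    def __init__(self, size):
        self.size = size
        self.open_cores = OrderedDict()
        self.opens = 0  # how many times a core had to be opened

    def request(self, core_name):
        if core_name in self.open_cores:
            self.open_cores.move_to_end(core_name)  # cache hit
        else:
            self.opens += 1  # cache miss: pay the cost of opening
            if len(self.open_cores) >= self.size:
                self.open_cores.popitem(last=False)  # evict LRU core
            self.open_cores[core_name] = True

cache = TransientCoreCache(100)
for _ in range(10):               # 10 full rounds of queries
    for user in range(110):       # 110 users, strictly in order
        cache.request(f"user_{user}")

print(cache.opens)  # 1100 -- every one of the 1100 requests misses
```

By the time the rotation comes back around to user_0, that core has already been evicted to make room, so sequential access over more users than slots defeats an LRU cache completely.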

Or 100 is a fine limit today, but your users keep adding documents and you come under memory pressure.

As you can tell I don't have any good answers. I've seen between 10M and
300M documents on a single machine....

BTW, in a _very_ casual test I found that about 1000 cores/second were
discovered in discovery mode. While transient cores aren't loaded at startup,
discovery time is still a consideration if you have 10s of thousands.

Best,
Erick



On Tue, Dec 3, 2013 at 3:33 PM, hank williams <hank777@gmail.com> wrote:

> On Tue, Dec 3, 2013 at 3:20 PM, Erick Erickson <erickerickson@gmail.com> wrote:
>
> > You probably want to look at "transient cores", see:
> > http://wiki.apache.org/solr/LotsOfCores
> >
> > But millions will be "interesting" for a single node, you must have some
> > kind of partitioning in mind?
> >
> >
> Wow. Thanks for that great link. Yes, we are sharding, so it's not like there
> would be millions of cores on one machine or even cluster. And since the
> cores are one per user, this is a totally clean approach. But still we want
> to make sure that we are not overloading the machine. Do you have any sense
> of what a good upper limit might be, or how we might figure that out?
>
>
>
> > Best,
> > Erick
> >
> >
> > On Tue, Dec 3, 2013 at 2:38 PM, hank williams <hank777@gmail.com> wrote:
> >
> > > We are building a system where there is a core for every user. There will
> > > be many tens or perhaps ultimately hundreds of thousands or millions of
> > > users. We do not need each of those users to have “warm” data in memory.
> > > In fact doing so would consume lots of memory unnecessarily, for users
> > > that might not have logged in in a long time.
> > >
> > > So my question is, is the default behavior of Solr to try to keep all of
> > > our cores warm, and if so, can we stop it? Also, given the number of cores
> > > that we will likely have, is there anything else we should be keeping in
> > > mind to maximize performance and minimize memory usage?
> > >
> >
>
>
>
> --
> blog: whydoeseverythingsuck.com
>
