Subject: Re: Where to use replicationFactor and maxShardsPerNode at SolrCloud?
From: Furkan KAMACI <furkankamaci@gmail.com>
To: solr-user@lucene.apache.org
Date: Mon, 22 Apr 2013 14:24:41 +0300

Sorry, but if I have 10 shards and a collection with a replication factor
of 1, and I start up 30 nodes, what happens to the last 10 nodes? I mean:

10 nodes as leaders
10 nodes as replicas

If I didn't specify a replication factor, a round-robin system would
assign the other 10 machines as:

+ 10 nodes as replicas

However, what will happen to those 10 nodes when I do specify a
replication factor?

2013/4/22 Erick Erickson

> 1> Imagine you have lots and lots and lots of different Solr indexes
> and a 50-node cluster. Further imagine that one of those indexes has 2
> shards, and a leader + 1 replica per shard is adequate to handle the
> load. You need some way to limit the number of nodes your index gets
> distributed to; that's what replicationFactor is for. So in this case
> replicationFactor=2 will stop assigning nodes to that particular
> collection after there's a leader + 1 replica.
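(For concreteness: replicationFactor is passed when the collection is
created through the Collections API. A minimal sketch of that scenario;
the host, port, and collection name here are made-up placeholders:

    # create a 2-shard collection capped at a leader + 1 replica per shard
    # (host/port and collection name are placeholders, not from the thread)
    curl 'http://localhost:8983/solr/admin/collections?action=CREATE&name=twoshardcollection&numShards=2&replicationFactor=2'

With 2 shards and replicationFactor=2, assignment stops at 4 cores total,
a leader plus one replica per shard, regardless of how many of the 50
nodes are still free.)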
> 2> In the system you described, there won't be more than one
> shard/node. But one strategy for growth is to "overshard": in the
> early days you put (numbers from thin air) 10 shards/node, and they
> are all quite small. As your index grows, you move to two nodes with 5
> shards each, and later to 5 nodes with 2 shards each, and so on. There
> are cases where you want some way to make the most of your hardware
> yet still plan for expansion; maxShardsPerNode is what allows that.
>
> Best
> Erick
>
> On Sun, Apr 21, 2013 at 3:51 PM, Furkan KAMACI wrote:
> > I know that when using SolrCloud we define the number of shards in
> > the system. When we start up new Solr instances, each one becomes a
> > leader for a shard, and if I continue to start up new Solr instances
> > (beyond the number of shards), each one becomes a replica for a
> > leader in a round-robin process.
> >
> > However, when I read the wiki there are two parameters:
> > replicationFactor and maxShardsPerNode.
> >
> > 1) Can you give details about what they are? If every newly added
> > Solr instance becomes a replica, what is the replication factor for?
> > 2) If what I wrote about the round-robin process is true, what is
> > maxShardsPerNode for? How can there be more than one shard per node
> > in the system I described?
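To make the oversharding idea in 2> concrete, here is a sketch of
creating a 10-shard collection on a single node; the collection name,
host, and numbers are illustrative, not from the thread:

    # pack all 10 shards onto one node now, spread them to more nodes later;
    # maxShardsPerNode defaults to 1, which would otherwise block this CREATE
    # on a one-node cluster (name and host/port are placeholders)
    curl 'http://localhost:8983/solr/admin/collections?action=CREATE&name=oversharded&numShards=10&replicationFactor=1&maxShardsPerNode=10'

As the index grows, those same 10 small shards can then be redistributed
across 2, then 5, then 10 nodes without re-sharding the index.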