From: Deeksha Sharma
To: solr-user
Subject: Re: SolrCloud Node fails that was hosting replicas of a collection
Date: Wed, 6 Jul 2016 18:37:28 +0000

Hi Erick,

Thanks for your reply, but I did use the ADDREPLICA API at first. However, since the newly added Solr node is down, it throws the exception "One of the nodes is down" and the operation fails.
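For reference, the call I was making is along these lines (the collection, shard and node names here are placeholders for my setup, not the real values):

    curl "http://localhost:8983/solr/admin/collections?action=ADDREPLICA&collection=collection1&shard=shard1&node=10.0.0.5:8983_solr"

It is this request that fails with "One of the nodes is down" while the target node still shows as down in the Admin UI.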
I noticed that the node becomes GREEN only after adding the core via the Admin UI.

Also, I am running multiple Solr instances inside Docker containers on AWS machines. I was testing what happens when one of the containers exits, and that is how I ran into this issue.

Thanks
Deeksha

________________________________________
From: Erick Erickson
Sent: Friday, July 01, 2016 12:58 PM
To: solr-user
Subject: Re: SolrCloud Node fails that was hosting replicas of a collection

Please do _not_ use the Admin UI core creation screen when dealing with SolrCloud. It can work, but you have to get everything exactly right.

Instead, you should be using the ADDREPLICA command from the Collections API; see:
https://cwiki.apache.org/confluence/display/solr/Collections+API

Although I do have to ask why the Solr node is going down. If it's not something permanent, the replicas should return to green after the node is restarted.

There are plans to provide a screen in the new Admin UI to allow you to add replicas to a collection and the like, but that code hasn't been added yet.

Best,
Erick

On Fri, Jul 1, 2016 at 12:18 PM, Deeksha Sharma wrote:
> Currently I am building a SolrCloud cluster with 3 ZooKeepers (an ensemble) and 4 Solr instances. The cluster hosts 4 collections and their replicas.
>
> When one Solr node, say Solr1, goes down (hosting 2 replicas, of collection1 and collection2), I add a new node to the cluster, and that node is brown in the Admin UI, which means the new node is down.
>
> When I create cores on this new Solr instance via the Admin UI (these cores are the 2 replicas that Solr1 was hosting), the new node becomes green (up and running).
>
> Am I doing the right thing by adding the new node and adding cores to it via the Admin UI, or is there a better way of doing this?
>
> Should Solr automatically host those 2 replicas on the newly added node, or do we have to manually add cores to it?
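For completeness: when the old node is gone for good (e.g. a Docker container that will never come back), the ADDREPLICA route Erick describes becomes a two-step sequence against the Collections API. A minimal sketch with hypothetical names (collection1, shard1, a dead replica called core_node2, and a new node registered as 10.0.0.6:8983_solr -- the real replica name comes from CLUSTERSTATUS):

    # inspect the cluster state and find the name of the dead replica
    curl "http://localhost:8983/solr/admin/collections?action=CLUSTERSTATUS"

    # drop the replica that lived on the dead node
    curl "http://localhost:8983/solr/admin/collections?action=DELETEREPLICA&collection=collection1&shard=shard1&replica=core_node2"

    # re-create it on the new node
    curl "http://localhost:8983/solr/admin/collections?action=ADDREPLICA&collection=collection1&shard=shard1&node=10.0.0.6:8983_solr"

As for the last question in the quoted message: Solr at this point does not move replicas off a failed node by itself (autoAddReplicas exists, but only for HDFS-backed indexes), so the ADDREPLICA call has to be issued explicitly for each replica the dead node was hosting.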