From: Jan Høydahl
Subject: Re: Question: Does Zookeeper/SolrCloud handle the failover?
Date: Fri, 6 Jan 2017 15:31:33 +0100
To: general@lucene.apache.org

Hi,

That should work well, as long as the list of Solr nodes is fairly constant. Using ZK, your client will always have the correct list of nodes and will always avoid unnecessary hops. With a hardcoded list of node names you'll need to update your LB whenever you make changes to your cluster, and there will be more hops, but that is perfectly acceptable and negligible for many users.

Another solution is to talk to ZK from your .NET application code directly, just to pull out the cluster state for each collection, and thus be able to pick nodes yourself in an intelligent way...
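A minimal sketch of that idea, assuming the ZooKeeperNetEx NuGet package; the ensemble addresses and the collection name "mycollection" are placeholders, and the method and property names follow that library's port of the Java client, so they may need small adjustments:

    // Sketch: read SolrCloud cluster state straight from Zookeeper.
    // Assumes the ZooKeeperNetEx package; hosts and collection name are placeholders.
    using System;
    using System.Text;
    using System.Threading.Tasks;
    using org.apache.zookeeper;

    class ClusterStateReader
    {
        static void Main() => Run().GetAwaiter().GetResult();

        static async Task Run()
        {
            // One-shot read, so no watcher is registered.
            var zk = new ZooKeeper("zk1:2181,zk2:2181,zk3:2181", 15000, (Watcher)null);

            // Live Solr nodes register themselves as ephemeral children of /live_nodes.
            var live = await zk.getChildrenAsync("/live_nodes", false);
            Console.WriteLine("Live nodes: " + string.Join(", ", live.Children));

            // Per-collection state (shards, replicas, leaders) is stored as JSON.
            var state = await zk.getDataAsync("/collections/mycollection/state.json", false);
            Console.WriteLine(Encoding.UTF8.GetString(state.Data));

            await zk.closeAsync();
        }
    }

From that JSON you can see which replicas belong to which shard and which node each one lives on, and then pick a target node yourself.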
--
Jan Høydahl, search solution architect
Cominvent AS - www.cominvent.com

> On 6 Jan 2017, at 15:20, Mike Linnetz wrote:
>
> Thanks for the reply!
> What you describe makes sense.
>
> I have another question, as I'd like to present my team all of the options.
>
> As an alternative to using SolrNetCloud to load balance, couldn't I just configure my network load balancer to distribute requests among my cloud instances? As I understand it, Zookeeper and SolrCloud should take care of propagating any updates across all nodes in a collection if a request is routed to any node in the collection.
> Is there anything I'm missing that would keep that solution from working?
>
> Thanks,
> Mike
>
>
> -----Original Message-----
> From: Mikhail Khludnev [mailto:mkhl@apache.org]
> Sent: Thursday, January 5, 2017 4:26 PM
> To: general@lucene.apache.org
> Subject: Re: Question: Does Zookeeper/SolrCloud handle the failover?
>
> Hello, Mike!
>
> As far as I understand, you need to connect to SolrCloud (Zookeeper) via the Solr.Net library.
> Here is a reference to the fork where it's done:
> https://github.com/mausch/SolrNet/issues/174#issuecomment-143697474
> Let me know if you need more info.
>
> On Thu, Jan 5, 2017 at 8:50 PM, Mike Linnetz wrote:
>
>> Hello,
>>
>> I am new to SolrCloud and Zookeeper, and there's a piece of the puzzle on which I'm unclear. I don't understand whether/how SolrCloud and Zookeeper handle the high-availability aspect. That is, once I have the Zookeeper ensemble set up, how do I reference the collection in a "high-availability" fashion so that if one Solr instance is down, requests are automatically routed to another instance?
>>
>> Am I thinking that SolrCloud does more than it actually does? That is, could I instead handle all load balancing outside of Zookeeper, and just use our company's network load balancer to route/distribute traffic for a single consolidated URL:port to the different actual Solr instances/ports?
>>
>> Here are more details in case it helps:
>> I have an ASP.NET application which uses SolrNet to query and write to a standalone Solr instance. In my program, I specify the hostname:port of my Solr instance:
>> Startup.Init("hostname:port");
>> This setup was used for development and testing.
>>
>> Now, for production, I need to implement Solr in an H.A. fashion so that I don't have a single point of failure. So, following Apache's documentation, I understand that the solution would be to set up an external Zookeeper ensemble.
>> So, let's say I set up a new Zookeeper ensemble (three instances). Let's also say I have three Solr instances, #shards = 2, #replicas per shard = 2.
>>
>> Once I have the Zookeeper ensemble set up, how do I reference the collection?
>> Previously, my code referenced a single standalone Solr instance, "http://solrserver1:9983". What do I point it to now?
>>
>> If I point it to any of the three Solr instances, and that instance goes down, the request would fail, right?
>>
>> I read similar questions online, for example https://groups.google.com/forum/#!msg/solrnet/-PeaGrLAMtw/pAfxuoYLVnIJ, and the answer seems to be that SolrNet doesn't support this type of HA. So, do I understand correctly that the load balancing aspect needs to be handled on the client side (or via a network load balancer), not on the Zookeeper/SolrCloud server side?
>>
>> Thanks,
>> Mike
>
>
> --
> Sincerely yours
> Mikhail Khludnev
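PS: If you end up handling failover on the client side without the cloud-aware SolrNet fork Mikhail linked, the idea is roughly the sketch below: try a fixed list of nodes in order and hand the first live one to your Solr client. The node URLs and the SolrNodePicker helper are made up for illustration (not SolrNet API), and it assumes the collection's default ping handler (/admin/ping) is reachable:

    // Naive client-side failover over a hardcoded node list.
    // URLs are placeholders; this helper is illustrative, not part of SolrNet.
    using System;
    using System.Collections.Generic;
    using System.Net.Http;
    using System.Threading.Tasks;

    class SolrNodePicker
    {
        static readonly HttpClient Http = new HttpClient();

        static readonly List<string> Nodes = new List<string>
        {
            "http://solr1:8983/solr",
            "http://solr2:8983/solr",
            "http://solr3:8983/solr"
        };

        // Returns the URL of the first node whose ping handler answers,
        // so the application can point its Solr client at a live node.
        public static async Task<string> PickLiveNodeAsync(string collection)
        {
            foreach (var node in Nodes)
            {
                try
                {
                    var resp = await Http.GetAsync(node + "/" + collection + "/admin/ping");
                    if (resp.IsSuccessStatusCode)
                        return node + "/" + collection;
                }
                catch (HttpRequestException)
                {
                    // Node unreachable; try the next one.
                }
            }
            throw new InvalidOperationException("No live Solr node found.");
        }
    }

The returned URL could then feed whatever Startup.Init(...) call the application already makes, at startup or whenever a request fails. A network load balancer with a health check does the same job without any code, which is why that is a perfectly reasonable option too.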