From: Le Cyberian
Date: Mon, 6 Mar 2017 23:42:06 +0100
Subject: Re: Zookeeper Cross Datacenter Cluster
To: "Van Klaveren, Brian N."

Thanks for your replies.

I see: laid out that way, failover is not possible if the main/major DC goes down. Is there something I can do with hierarchical quorum groups or observers? (Rough config sketches below, after the quoted thread.)

I am still quite confused about how to achieve this setup: how would anyone build failover across only two DCs? :-/

On Mon, Mar 6, 2017 at 11:08 PM, Van Klaveren, Brian N. <bvan@slac.stanford.edu> wrote:

> Or, an extra server in a third datacenter to break the tie.
>
> Brian
>
>> On Mar 6, 2017, at 2:05 PM, Jordan Zimmerman wrote:
>>
>> This scenario is not possible. An even number of servers does not help
>> you, because a ZooKeeper quorum is (N/2)+1. So if you put 2 servers in
>> each DC, a network partition would disable BOTH DCs: with 4 servers the
>> quorum is (4/2)+1 == 3, and neither half of a 2/2 split reaches it. So
>> the only option is to choose one of the DCs as the main DC and put an
>> extra server there. That way you can lose the "minor" DC.
>>
>> -Jordan
>>
>>> On Mar 6, 2017, at 11:34 AM, Le Cyberian wrote:
>>>
>>> Hi Guys,
>>>
>>> I would like to run a Kafka cluster, which depends on ZooKeeper like
>>> many other great projects out there. Kafka itself would have 4 nodes;
>>> for Kafka an even number is not a problem, because it does leader
>>> election based on the state it keeps in ZooKeeper.
>>>
>>> My scenario is two server rooms on different floors with fiber
>>> connectivity between them, so network latency is almost that of a
>>> local connection.
>>>
>>> I would like to run a ZooKeeper cluster across both rooms for
>>> auto-failover / redundancy, so that if one server room goes down the
>>> other keeps working.
>>>
>>> For example: a 5-node ZooKeeper ensemble split 3/2 across the two
>>> rooms tolerates the failure of any 2 nodes. But if the room holding
>>> the 3 servers goes down, the remaining 2 cannot form a quorum and
>>> the ensemble stops.
>>>
>>> Can you please suggest how to achieve failover / redundancy between
>>> two server rooms, or two locations generally?
>>>
>>> Is it possible to run a 3-node cluster in each server room with some
>>> sort of master-master replication between them?
>>>
>>> Thanks for your time and help in advance.
>>>
>>> Kind regards,
>>>
>>> Le
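P.S. To make the hierarchical-quorum idea concrete, here is a rough,
untested zoo.cfg sketch for the 3+2 layout (hostnames such as zk1-roomA
are placeholders, not real machines). As far as I can tell from the docs,
a hierarchical quorum needs a weighted majority of the votes within a
majority of the groups, and with only two groups a "majority of groups"
means both of them, so this still cannot survive the loss of either room:

  # 5 voters: 3 in room A, 2 in room B (placeholder hostnames)
  server.1=zk1-roomA:2888:3888
  server.2=zk2-roomA:2888:3888
  server.3=zk3-roomA:2888:3888
  server.4=zk1-roomB:2888:3888
  server.5=zk2-roomB:2888:3888
  # One group per room; a quorum must contain a weighted majority of
  # votes from a majority of the groups, i.e. from both groups here
  group.1=1:2:3
  group.2=4:5
  weight.1=1
  weight.2=1
  weight.3=1
  weight.4=1
  weight.5=1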
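Observers would look roughly like this instead (again an untested sketch
with placeholder hostnames). Room B gets non-voting observers, which keeps
read capacity close to its local clients, but because observers never
vote, room B alone still cannot accept writes if room A is lost:

  # 3 voters in room A, 2 non-voting observers in room B
  server.1=zk1-roomA:2888:3888
  server.2=zk2-roomA:2888:3888
  server.3=zk3-roomA:2888:3888
  server.4=zk1-roomB:2888:3888:observer
  server.5=zk2-roomB:2888:3888:observer

  # and additionally in zoo.cfg on the two observers themselves:
  peerType=observer

Either way this seems to confirm Brian's point: a small tie-breaker voter
in a third location appears to be the only way to survive the loss of a
whole room.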