From: Nate McCall
Date: Fri, 2 Sep 2011 15:55:20 -0500
Subject: Re: HUnavailableException: : May not be enough replicas present to handle consistency level.
To: hector-users@googlegroups.com
Cc: user@cassandra.apache.org

Yes - you would need at least 3 replicas per data center to use
LOCAL_QUORUM and survive a node failure.
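For reference, a minimal Java sketch of the quorum arithmetic from the
DataStax docs quoted further down in the thread (the class and method
names here are illustrative only, not part of any Cassandra or Hector API):

// Quorum per data center: (replicas_in_dc / 2) + 1, fractions rounded down.
public class QuorumMath {
    static int localQuorum(int replicasInDc) {
        return (replicasInDc / 2) + 1; // integer division rounds down
    }

    public static void main(String[] args) {
        // With DC1:2, LOCAL_QUORUM needs 2 of 2 local replicas -- no replica may be down.
        System.out.println(localQuorum(2)); // prints 2
        // With DC1:3, LOCAL_QUORUM needs 2 of 3 local replicas -- one replica may be down.
        System.out.println(localQuorum(3)); // prints 2
    }
}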
On Fri, Sep 2, 2011 at 3:51 PM, Oleg Tsvinev wrote:
> Do you mean I need to configure 3 replicas in each DC and keep using
> LOCAL_QUORUM? In which case, if I'm following your logic, even if one of
> the 3 goes down I'll still have 2 to ensure LOCAL_QUORUM succeeds?
>
> On Fri, Sep 2, 2011 at 1:44 PM, Nate McCall wrote:
>> In your options, you have configured 2 replicas for each data center:
>> Options: [DC2:2, DC1:2]
>>
>> If one of those replicas is down, then LOCAL_QUORUM will fail as there
>> is only one replica left 'locally.'
>>
>> On Fri, Sep 2, 2011 at 3:35 PM, Oleg Tsvinev wrote:
>>> from http://www.datastax.com/docs/0.8/consistency/index:
>>>
>>> <quorum = (replication_factor / 2) + 1 with any resulting fractions
>>> rounded down.>
>>>
>>> I have RF=2, so the majority of replicas is 2/2+1=2, which I still have
>>> after the 3rd node goes down?
>>>
>>> On Fri, Sep 2, 2011 at 1:22 PM, Nate McCall wrote:
>>>> It looks like you only have 2 replicas configured in each data center?
>>>>
>>>> If so, LOCAL_QUORUM cannot be achieved with a host down, same as with
>>>> QUORUM on RF=2 in a single-DC cluster.
>>>>
>>>> On Fri, Sep 2, 2011 at 1:40 PM, Oleg Tsvinev wrote:
>>>>> I believe I don't quite understand the semantics of this exception:
>>>>>
>>>>> me.prettyprint.hector.api.exceptions.HUnavailableException: : May not
>>>>> be enough replicas present to handle consistency level.
>>>>>
>>>>> Does it mean there *might be* enough?
>>>>> Does it mean there *is not* enough?
>>>>>
>>>>> My case is as follows - I have 3 nodes with keyspaces configured like this:
>>>>>
>>>>> Replication Strategy: org.apache.cassandra.locator.NetworkTopologyStrategy
>>>>> Durable Writes: true
>>>>> Options: [DC2:2, DC1:2]
>>>>>
>>>>> Hector can only connect to nodes in DC1 and is configured to neither see
>>>>> nor connect to nodes in DC2. Replication between datacenters DC1 and DC2
>>>>> is handled asynchronously by Cassandra itself. Each of the 6 total
>>>>> nodes can see any of the remaining 5.
>>>>>
>>>>> Inserts with LOCAL_QUORUM CL work fine when all 3 nodes are up.
>>>>> However, this morning one node went down and I started seeing the
>>>>> HUnavailableException: : May not be enough replicas present to handle
>>>>> consistency level.
>>>>>
>>>>> I believed that if I have 3 nodes and one goes down, the two remaining
>>>>> nodes would be sufficient for my configuration.
>>>>>
>>>>> Please help me to understand what's going on.
>>>>>
>>>>
>>>
>>
>
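For completeness, a minimal Hector sketch of setting LOCAL_QUORUM as the
default consistency level. This assumes the usual ConfigurableConsistencyLevel
/ HFactory pattern; the cluster name, keyspace name, and host:port list are
placeholders, not the original poster's actual configuration:

import me.prettyprint.cassandra.model.ConfigurableConsistencyLevel;
import me.prettyprint.cassandra.service.CassandraHostConfigurator;
import me.prettyprint.hector.api.Cluster;
import me.prettyprint.hector.api.HConsistencyLevel;
import me.prettyprint.hector.api.Keyspace;
import me.prettyprint.hector.api.factory.HFactory;

public class LocalQuorumExample {
    public static void main(String[] args) {
        // Connect only to DC1 nodes, as in the setup described above.
        Cluster cluster = HFactory.getOrCreateCluster("MyCluster",
                new CassandraHostConfigurator("dc1-node1:9160,dc1-node2:9160,dc1-node3:9160"));

        // Default all reads and writes to LOCAL_QUORUM.
        ConfigurableConsistencyLevel ccl = new ConfigurableConsistencyLevel();
        ccl.setDefaultReadConsistencyLevel(HConsistencyLevel.LOCAL_QUORUM);
        ccl.setDefaultWriteConsistencyLevel(HConsistencyLevel.LOCAL_QUORUM);

        // With strategy_options DC1:2, LOCAL_QUORUM needs both local replicas up;
        // with DC1:3 it tolerates one local replica being down.
        Keyspace keyspace = HFactory.createKeyspace("MyKeyspace", cluster, ccl);
        // ... use keyspace with mutators/queries as usual ...
    }
}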