cassandra-user mailing list archives

From Oleg Tsvinev <oleg.tsvi...@gmail.com>
Subject Re: HUnavailableException: : May not be enough replicas present to handle consistency level.
Date Fri, 02 Sep 2011 21:10:17 GMT
Yes, I think I get it now. "quorum of replicas" != "quorum of nodes"
and I don't think quorum of nodes is ever defined. Thank you,
Konstantin.

Now, I believe I need to change my cluster to store data on the two
remaining nodes in DC1, keeping 3 nodes in DC2. I believe nodetool
removetoken is what I need to use. Anything else I can/should do?

On Fri, Sep 2, 2011 at 1:56 PM, Konstantin  Naryshkin
<konstantinn@a-bb.net> wrote:
> I think that Oleg may have misunderstood how replicas are selected. If you have 3 nodes
> in your cluster and an RF of 2, Cassandra first selects which two nodes out of the 3 will
> get the data, and only then does it write it out. The selection is based on the row key,
> the tokens of the nodes, and your choice of partitioner. This means that Cassandra does
> not need to store which node is responsible for a given row; that information can be
> recalculated whenever it is needed.
>
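As a toy illustration of the selection Konstantin describes (this is not Cassandra's actual code; the ring, tokens, and node names below are made up), replica placement is a pure function of the key's token, the node tokens, and the replication factor, so it can be recomputed on demand:

    import java.util.ArrayList;
    import java.util.List;
    import java.util.SortedMap;
    import java.util.TreeMap;

    // Toy token ring: walk clockwise from the key's token and take the
    // first `rf` distinct nodes. Purely illustrative.
    public class ToyRing {
        private final TreeMap<Long, String> ring = new TreeMap<Long, String>(); // token -> node

        public void addNode(long token, String node) { ring.put(token, node); }

        public List<String> replicasFor(long keyToken, int rf) {
            List<String> replicas = new ArrayList<String>();
            SortedMap<Long, String> tail = ring.tailMap(keyToken);
            for (String node : tail.values())
                if (replicas.size() < rf && !replicas.contains(node)) replicas.add(node);
            for (String node : ring.values())   // wrap around the ring
                if (replicas.size() < rf && !replicas.contains(node)) replicas.add(node);
            return replicas;
        }

        public static void main(String[] args) {
            ToyRing ring = new ToyRing();
            ring.addNode(0L, "A"); ring.addNode(100L, "B"); ring.addNode(200L, "C");
            // With RF=2, a key whose token is 150 is stored on C and (wrapping) A,
            // never on B -- so "2 of 3 nodes up" is not the same as "both replicas up".
            System.out.println(ring.replicasFor(150L, 2));  // prints [C, A]
        }
    }
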
> The error that you are getting is because, while you may have 2 nodes up, those are not
> the nodes that Cassandra will use to store the data.
>
> ----- Original Message -----
> From: "Nate McCall" <nate@datastax.com>
> To: hector-users@googlegroups.com
> Cc: "Cassandra Users" <user@cassandra.apache.org>
> Sent: Friday, September 2, 2011 4:44:01 PM
> Subject: Re: HUnavailableException: : May not be enough replicas present to handle
> consistency level.
>
> In your options, you have configured 2 replicas for each data center:
> Options: [DC2:2, DC1:2]
>
> If one of those replicas is down, then LOCAL_QUORUM will fail as there
> is only one replica left 'locally.'
>
>
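To make Nate's point concrete, a minimal sketch of the availability arithmetic (this is not a real Cassandra or Hector API, just the rule he describes):

    // LOCAL_QUORUM needs a majority of the replicas *in the local DC*.
    // quorum = RF / 2 + 1, with integer division rounding down.
    static boolean localQuorumAchievable(int localRf, int liveLocalReplicas) {
        int required = localRf / 2 + 1;
        return liveLocalReplicas >= required;
    }

    // DC1:2, both replicas up -> required 2, live 2 -> achievable
    // DC1:2, one replica down -> required 2, live 1 -> HUnavailableException
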
> On Fri, Sep 2, 2011 at 3:35 PM, Oleg Tsvinev <oleg.tsvinev@gmail.com> wrote:
>> from http://www.datastax.com/docs/0.8/consistency/index:
>>
>> <A “quorum” of replicas is essentially a majority of replicas, or RF /
>> 2 + 1 with any resulting fractions rounded down.>
>>
>> I have RF=2, so a majority of replicas is 2/2+1 = 2, which I still have
>> after the 3rd node goes down?
>>
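Working the quoted formula through (simple arithmetic, nothing version-specific): quorum = floor(RF/2) + 1, so RF=2 gives 2, meaning every replica must be up; the fact that 2 of the 3 nodes are still alive only helps if those 2 happen to be the replicas for the key. RF=3 is the smallest factor where a quorum survives one down replica:

    for (int rf = 2; rf <= 4; rf++)
        System.out.println("RF=" + rf + " -> quorum=" + (rf / 2 + 1));
    // RF=2 -> quorum=2  (no replica may be down)
    // RF=3 -> quorum=2  (tolerates one down replica)
    // RF=4 -> quorum=3
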
>> On Fri, Sep 2, 2011 at 1:22 PM, Nate McCall <nate@datastax.com> wrote:
>>> It looks like you only have 2 replicas configured in each data center?
>>>
>>> If so, LOCAL_QUORUM cannot be achieved with a host down, the same as with
>>> QUORUM on RF=2 in a single-DC cluster.
>>>
>>> On Fri, Sep 2, 2011 at 1:40 PM, Oleg Tsvinev <oleg.tsvinev@gmail.com> wrote:
>>>> I believe I don't quite understand semantics of this exception:
>>>>
>>>> me.prettyprint.hector.api.exceptions.HUnavailableException: : May not
>>>> be enough replicas present to handle consistency level.
>>>>
>>>> Does it mean there *might be* enough?
>>>> Does it mean there *is not* enough?
>>>>
>>>> My case is as follows: I have 3 nodes with keyspaces configured as follows:
>>>>
>>>> Replication Strategy: org.apache.cassandra.locator.NetworkTopologyStrategy
>>>> Durable Writes: true
>>>> Options: [DC2:2, DC1:2]
>>>>
>>>> Hector can only connect to nodes in DC1 and is configured to neither see
>>>> nor connect to nodes in DC2. Replication between datacenters DC1 and DC2
>>>> is left to Cassandra, which performs it asynchronously. Each of the 6
>>>> total nodes can see all of the remaining 5.
>>>>
>>>> Inserts with LOCAL_QUORUM CL work fine when all 3 nodes are up.
>>>> However, this morning one node went down and I started seeing the
>>>> HUnavailableException: : May not be enough replicas present to handle
>>>> consistency level.
>>>>
>>>> I believed that if I have 3 nodes and one goes down, the two remaining
>>>> nodes are sufficient for my configuration.
>>>>
>>>> Please help me to understand what's going on.
>>>>
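For reference, a minimal Hector sketch along the lines Oleg describes -- the host, keyspace, and column family names are invented, and the calls are from Hector's 0.8-era API as best I recall, so treat this as a sketch rather than a verified example:

    import me.prettyprint.cassandra.model.ConfigurableConsistencyLevel;
    import me.prettyprint.cassandra.serializers.StringSerializer;
    import me.prettyprint.hector.api.Cluster;
    import me.prettyprint.hector.api.HConsistencyLevel;
    import me.prettyprint.hector.api.Keyspace;
    import me.prettyprint.hector.api.factory.HFactory;
    import me.prettyprint.hector.api.mutation.Mutator;

    public class LocalQuorumWrite {
        public static void main(String[] args) {
            // Connect only to the DC1 nodes (hypothetical host names).
            Cluster cluster = HFactory.getOrCreateCluster("MyCluster",
                    "dc1-node1:9160,dc1-node2:9160,dc1-node3:9160");

            ConfigurableConsistencyLevel ccl = new ConfigurableConsistencyLevel();
            ccl.setDefaultWriteConsistencyLevel(HConsistencyLevel.LOCAL_QUORUM);
            ccl.setDefaultReadConsistencyLevel(HConsistencyLevel.LOCAL_QUORUM);

            Keyspace keyspace = HFactory.createKeyspace("MyKeyspace", cluster, ccl);

            // With Options: [DC2:2, DC1:2], this insert needs BOTH DC1 replicas
            // of the row to be up; if the down node holds one of them, Hector
            // throws HUnavailableException.
            Mutator<String> mutator =
                    HFactory.createMutator(keyspace, StringSerializer.get());
            mutator.insert("some-row-key", "MyColumnFamily",
                    HFactory.createStringColumn("name", "value"));
        }
    }
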
>>>
>>
>
