cassandra-user mailing list archives

From Nate McCall <>
Subject Re: Quorum, Hector, and datacenter preference
Date Thu, 24 Mar 2011 14:05:20 GMT
We have a load balancing policy which selects the host based on latency
and uses a phi-accrual convict algorithm in a manner similar to
DynamicSnitch. With this policy you inherently get the closest replica
whenever possible, since it will most likely be the best performing.

This policy is currently in trunk and on the 0.7.0 tip. We should have
a new release containing it out in the next few days.
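[Editor's note: as a rough illustration of the latency-aware selection Nate describes, the sketch below keeps an exponentially weighted moving average of observed latency per host and picks the lowest. All class and method names here are hypothetical; this is not Hector's actual implementation, and the real policy also uses phi-accrual failure detection to convict dead hosts.]

```java
import java.util.HashMap;
import java.util.Map;

// Hypothetical latency-aware host selector, loosely in the spirit of the
// policy described above (NOT Hector's actual code).
public class LatencyAwareSelector {
    private static final double ALPHA = 0.25; // EWMA smoothing factor
    private final Map<String, Double> score = new HashMap<>();

    // Record an observed request latency (in ms) for a host.
    public void recordLatency(String host, double latencyMs) {
        Double prev = score.get(host);
        double next = (prev == null)
                ? latencyMs
                : ALPHA * latencyMs + (1 - ALPHA) * prev;
        score.put(host, next);
    }

    // Pick the host with the lowest smoothed latency. The closest replica
    // usually wins because it tends to respond fastest, which is why such
    // a policy "inherently" prefers local nodes.
    public String bestHost() {
        String best = null;
        double bestScore = Double.MAX_VALUE;
        for (Map.Entry<String, Double> e : score.entrySet()) {
            if (e.getValue() < bestScore) {
                bestScore = e.getValue();
                best = e.getKey();
            }
        }
        return best;
    }
}
```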

On Thu, Mar 24, 2011 at 8:46 AM, Jonathan Colby
<> wrote:
> Indeed I found the big flaw in my own logic. Even writing to the "local" cassandra
> nodes does not guarantee where the replicas will end up. The decision of where to
> write the first replica is based on the token ring, which is spread across all nodes
> regardless of data center. Right?
> On Mar 24, 2011, at 2:02 PM, Jonathan Colby wrote:
>> Hi -
>> Our cluster is spread between 2 datacenters. We have a straightforward IP assignment,
>> so OldNetworkTopologyStrategy (with the RackInferringSnitch) works well. We have
>> Hector-based cassandra clients in each of those data centers. The Hector clients all
>> have a list of all cassandra nodes across both data centers. RF=3.
>> Is there an order as to which data center gets the first write? In other words,
>> would (or can) the Hector client do its first write to the cassandra nodes in its
>> own data center?
>> It would be ideal if Hector chose the "local" cassandra nodes. That way, if one
>> data center is unreachable, a quorum of replicas in cassandra is still reached
>> (because the data was written to the working data center first).
>> Otherwise, if the cassandra writes are effectively random from the Hector client's
>> point of view, a data center outage would result in read failures for any data that
>> has 2 replicas in the lost data center.
>> Is anyone doing this?  Is there a flaw in my logic?
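[Editor's note: the failure scenario above is plain quorum arithmetic. With RF=3, QUORUM requires floor(RF/2) + 1 = 2 live replicas, so losing a data center that happens to hold 2 of the 3 replicas of a key leaves only 1, and QUORUM reads and writes for that key fail. A minimal sketch:]

```java
// Quorum arithmetic for the RF=3 scenario discussed above.
public class QuorumMath {
    // QUORUM needs a majority of replicas: floor(RF / 2) + 1.
    public static int quorum(int replicationFactor) {
        return replicationFactor / 2 + 1;
    }

    // A QUORUM operation succeeds only if enough replicas are alive.
    public static boolean quorumReachable(int liveReplicas, int replicationFactor) {
        return liveReplicas >= quorum(replicationFactor);
    }
}
```

With RF=3, `quorum(3)` is 2: one surviving replica is not enough, two are.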
