incubator-cassandra-user mailing list archives

From Eric tamme <>
Subject Re: Docs: Token Selection
Date Fri, 17 Jun 2011 18:33:22 GMT
> Yes.  But, the more I think about it, the more I see issues.  Here is what I
> envision (Issues marked with *):
> Three or more dc's, each serving as fail-overs for the others with 1 maximum
> unavailable dc supported at a time.
> Each dc is a production dc serving users that I choose.
> Each dc also stores 0-1 replicas from the other dc's.
> Direct customers to their "home" dc of my choice.
> Data coming from the client local to the dc is replicated X times in the
> local dc and 1 time in any other dc (randomly).

There is no "random" placement with NTS.  Make sure each DC has
complete, evenly distributed token range coverage for the nodes in
that DC. Use PropertyFileSnitch and specify a replication factor for
each DC.
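A minimal sketch of the two pieces involved (the IPs, DC names, and replica counts here are hypothetical). PropertyFileSnitch maps each node to a DC and rack in conf/cassandra-topology.properties:

```
# conf/cassandra-topology.properties
192.168.1.10=DC1:RAC1
192.168.1.11=DC1:RAC1
192.168.2.10=DC2:RAC1
default=DC1:RAC1
```

and the keyspace then names an explicit replica count per DC (shown in later CQL syntax; the cassandra-cli syntax of this era differs, but the per-DC strategy options are the same idea):

```
CREATE KEYSPACE myks
  WITH replication = {
    'class': 'NetworkTopologyStrategy',
    'DC1': 3,
    'DC2': 1
  };
```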

> In the event a dc becomes unreachable by users, an arbitrary fail-over dc can
> serve their requests, albeit with increased latency.
> *There will only be 1 replica left amongst the remaining fail-over dc's, so
> this could be a problem for any CL other than CL.ONE.

So ... increase the RF per DC. Either you want 1 replica or you want
more; you can't have it both ways.
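The trade-off is just quorum arithmetic. A hedged sketch (the per-DC replica counts are hypothetical) showing why a single remote replica cannot satisfy QUORUM once its home DC is down:

```python
# Sketch of Cassandra's quorum arithmetic with hypothetical RF values.

def quorum(n):
    """Cassandra requires floor(n/2) + 1 live replicas for a quorum of n."""
    return n // 2 + 1

replication = {"DC1": 3, "DC2": 1}    # RF per data center (hypothetical)
total_rf = sum(replication.values())  # 4 replicas cluster-wide

# Cluster-wide QUORUM needs floor(4/2) + 1 = 3 replicas.
assert quorum(total_rf) == 3

# If DC1 fails, only DC2's single replica survives.
survivors = total_rf - replication["DC1"]
print(quorum(total_rf) <= survivors)  # False: QUORUM cannot be met

# Raising DC2's RF to 2 still leaves cluster-wide QUORUM unmet (2 < 3),
# but LOCAL_QUORUM inside DC2 would then need only quorum(2) = 2 replicas.
```

This is why the reply says to increase the RF per DC: the fail-over DC can only serve strong-consistency reads from replicas it actually holds.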

> *During the fail-over state, the cluster needs to know that the real "home"
> of the replicas belongs to the currently unavailable dc.

You are introducing concepts that don't exist in cassandra.  There is
no "real home", and by trying to make cassandra aware of where home is
for some set of clients, you are subverting the completely distributed
design.

> But, as of now, I
> don't think that's possible and so new writes will start to be replicated in
> the current dc as if the currently-used fail-over dc is the home dc.

It will behave exactly as it did when clients were in the other data
center, and will insert data locally if possible, and replicate
according to your placement strategy.

> Maybe these goals can be achieved with a kind of ordered asymmetrical
> replication strategy like you illustrated above.  The hard part will be to
> figure out a simple and elegant way to do this w/o undermining C*.

As I said previously, trying to make cassandra treat things
differently based on some kind of persistent locality set it maintains
in memory, or whatever, sounds like you will be absolutely
undermining the core principles of how cassandra works.

