Yes, I read the same and it sounded weird.

"Note that with RackAwareStrategy, succeeding nodes along the ring should alternate data centers to avoid hot spots. For instance, if you have nodes A, B, C, and D in increasing Token order, and instead of alternating you place A and B in DC1, and C and D in DC2, then nodes C and A will have disproportionately more data on them because they will be the replica destination for every Token range in the other data center."
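
The hot-spot effect that note describes can be seen with a toy simulation. This is a simplified model, not Cassandra code: it assumes the remote replica for each node's token range is simply the next node clockwise on the ring that belongs to the other data center, and the node names, DC labels, and helper functions (`remote_replica`, `replica_counts`) are all illustrative:

```python
def remote_replica(nodes, dc, owner_idx):
    """Next node clockwise on the ring that lives in the other data center."""
    n = len(nodes)
    for step in range(1, n + 1):
        cand = nodes[(owner_idx + step) % n]
        if dc[cand] != dc[nodes[owner_idx]]:
            return cand

def replica_counts(nodes, dc):
    """How many remote token ranges land on each node."""
    counts = {node: 0 for node in nodes}
    for i in range(len(nodes)):
        counts[remote_replica(nodes, dc, i)] += 1
    return counts

nodes = ["A", "B", "C", "D"]

# Non-alternating: A,B in DC1 and C,D in DC2 -> C and A absorb every remote range
print(replica_counts(nodes, {"A": 1, "B": 1, "C": 2, "D": 2}))
# -> {'A': 2, 'B': 0, 'C': 2, 'D': 0}

# Alternating data centers -> each node takes exactly one remote range
print(replica_counts(nodes, {"A": 1, "B": 2, "C": 1, "D": 2}))
# -> {'A': 1, 'B': 1, 'C': 1, 'D': 1}
```

With A and B both in DC1, node C is the first DC2 node after both of them, so it receives both of DC1's ranges (and A likewise receives both of DC2's), which is exactly the imbalance the documentation warns about.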

I hope there is a way to make the replica of the 1st node of DC1 be the 1st node of DC2,
the replica of the 2nd node of DC1 be the 2nd node of DC2, and so on.

I also hope there is a way to make the replica of the 1st node of Rack1 in DC1 be the 1st node of Rack2 in DC1,
the replica of the 2nd node of Rack1 in DC1 be the 2nd node of Rack2 in DC1, and so on.

Please advise if this is not possible.

2011/2/16 Wangpei (Peter) <>

I have the same question.

I read the source code of NetworkTopologyStrategy, and it seems it always puts the replica on the first nodes along the ring of the DC.

Unless I misunderstand, it seems those nodes will become hot spots.

Why does NetworkTopologyStrategy work that way? Is there some alternative that avoids this shortcoming?


Thanks in advance.




From: Aaron Morton []
Sent: 2011-2-16 3:56
Subject: Re: Partitioning


You can, using the NetworkTopologyStrategy; see


and NetworkTopologyStrategy in the conf/cassandra.yaml file.


You can control the number of replicas in each DC.
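
For example, in the cassandra-cli syntax of that era this could be set per keyspace. This is only a sketch: the keyspace name and the data center names DC1 through DC4 are placeholders, and the exact syntax depends on your Cassandra version:

```
create keyspace MyKeyspace
    with placement_strategy = 'org.apache.cassandra.locator.NetworkTopologyStrategy'
    and strategy_options = [{DC1:1, DC2:1, DC3:1, DC4:1}];
```

With one replica per data center as above, the total replication factor is 4, i.e. one full copy of the data in each of the four DCs.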


Also look at conf/ for information on how to tell Cassandra about your network topology.
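
If that refers to the PropertyFileSnitch, its topology file (conf/cassandra-topology.properties in the distributions I have seen) maps each node's IP address to a data center and rack. The IPs, DC names, and rack names below are placeholders, and the format is an assumption based on that snitch:

```
# Assumed PropertyFileSnitch format: <node IP>=<data center>:<rack>
192.168.1.100=DC1:RAC1
192.168.1.101=DC1:RAC2
192.168.2.100=DC2:RAC1
192.168.2.101=DC2:RAC2

# Fallback for any node not listed above
default=DC1:RAC1
```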



On 16 Feb 2011, at 05:10 AM, "RW>N" <> wrote:

I am new to Cassandra and am evaluating it.

The following diagram shows how my setup will be:
Here each oval represents one data center. I want to keep N=4, i.e. four
copies of every Column Family, with one copy in each data center. In
other words, the COMPLETE database must be contained in each of the data centers.

1. Is this possible? If so, how do I configure it (partitioner, replicas, etc.)?



P.S. Excuse my multiple postings of the same message; I am unable to subscribe for
some reason.