incubator-cassandra-user mailing list archives

From <>
Subject Brisk Unbalanced Ring
Date Tue, 19 Jul 2011 00:23:37 GMT
We're running Brisk v1 beta2 on 12 nodes in EC2 - 8 Cassandra nodes in DC1 and 4 Brisk nodes in DC2. We wrote
a few TB of data to the cluster, and unfortunately the load is very unbalanced. Every key
is the same size and we are using RandomPartitioner.

There are two replicas of the data in DC1 and one replica in DC2. The load in DC2 makes
sense (about 250GB per node). DC1 should also have about 250GB per node (since it holds twice
the data across twice the number of nodes), but as can be seen below, two nodes have an inordinate
amount of data while the other 6 have only about 128GB:
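The expected-load reasoning above can be checked with quick back-of-envelope arithmetic in Python (the node counts, replica counts, and the ~250 GB DC2 figure come from the message; the rest is just arithmetic):

```python
# Infer the amount of unique data from the DC2 observation:
# ~250 GB per node, 1 replica spread over 4 nodes.
dc2_nodes, dc2_replicas, dc2_load_per_node = 4, 1, 250  # GB
raw_data = dc2_nodes * dc2_load_per_node / dc2_replicas  # ~1000 GB of unique data

# DC1 stores 2 replicas of that data over 8 nodes.
dc1_nodes, dc1_replicas = 8, 2
expected_dc1_load = raw_data * dc1_replicas / dc1_nodes
print(expected_dc1_load)  # → 250.0 GB per DC1 node
```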

Address         DC          Rack        Status State   Load            Owns    Token
                                                                               148873535527910577765226390751398592512
                DC1         RAC1        Up     Normal  901.6 GB        12.50%  0
                DC2         RAC1        Up     Normal  258.23 GB       6.25%   10633823966279326983230456482242756608
                DC1         RAC1        Up     Normal  129.08 GB       6.25%   21267647932558653966460912964485513216
                DC1         RAC1        Up     Normal  128.51 GB       12.50%  42535295865117307932921825928971026432
                DC2         RAC1        Up     Normal  257.32 GB       6.25%   53169119831396634916152282411213783040
                DC1         RAC1        Up     Normal  128.67 GB       6.25%   63802943797675961899382738893456539648
                DC1         RAC2        Up     Normal  643.14 GB       12.50%  85070591730234615865843651857942052864
                DC2         RAC1        Up     Normal  256.78 GB       6.25%   95704415696513942849074108340184809472
                DC1         RAC2        Up     Normal  128.96 GB       6.25%   106338239662793269832304564822427566080
                DC1         RAC2        Up     Normal  128.3 GB        12.50%  127605887595351923798765477786913079296
                DC2         RAC1        Up     Normal  257.15 GB       6.25%   138239711561631250781995934269155835904
                DC1         RAC2        Up     Normal  129.46 GB       6.25%   148873535527910577765226390751398592512
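For reference, the tokens in the ring output follow the usual multi-DC layout for RandomPartitioner: evenly spaced within each datacenter, with DC2 offset so no two nodes share a token. A small Python sketch (the function name is my own) reproduces the exact tokens shown above:

```python
# RandomPartitioner's token space is [0, 2**127).
RING_SIZE = 2 ** 127

def balanced_tokens(node_count, offset=0):
    """Evenly spaced tokens for one DC, shifted by a per-DC offset."""
    return [(i * RING_SIZE // node_count + offset) % RING_SIZE
            for i in range(node_count)]

dc1 = balanced_tokens(8)                            # 8 Cassandra nodes in DC1
dc2 = balanced_tokens(4, offset=RING_SIZE // 16)    # 4 Brisk nodes in DC2
```

With these inputs, `dc1[1]` is 21267647932558653966460912964485513216 and `dc2[0]` is 10633823966279326983230456482242756608, matching the ring output, so the uneven load is not a token-assignment problem.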

I should also note that the first node used to have 640GB of load until the instance went
down and we needed to run repair on a new instance in its place.

Any ideas why this may have happened?


