incubator-cassandra-user mailing list archives

From David McNelis <dmcne...@agentisenergy.com>
Subject balancing issue with Random partitioner
Date Mon, 12 Sep 2011 20:59:55 GMT
We are running the DataStax 0.8 RPM distro.  We have a situation where we
have 4 nodes and each owns 25% of the keys.  However, the last node in the
ring does not seem to be getting much of a load at all.

We are using the random partitioner, and we have a total of about 20k keys,
which are sequential...
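For context on why the keys being sequential shouldn't matter: RandomPartitioner places rows by the MD5 hash of the key, so consecutive keys scatter across the token range. A rough sketch of the idea (the real partitioner derives a BigInteger from the signed MD5 digest; this unsigned version is just an illustration, not Cassandra's code):

```python
import hashlib

def random_partitioner_token(key: bytes) -> int:
    """Approximate RandomPartitioner placement: interpret the
    MD5 digest of the row key as a 128-bit integer token."""
    return int.from_bytes(hashlib.md5(key).digest(), "big")

# Sequential keys land on wildly different tokens:
for k in (b"key0001", b"key0002", b"key0003"):
    print(k, random_partitioner_token(k))
```

So even with 20k sequential keys, the hashed tokens should spread roughly evenly around the ring.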

Our nodetool ring output is currently:

Address         DC          Rack   Status  State   Load       Owns    Token
                                                                      127605887595351923798765477786913079296
10.181.138.167  datacenter1 rack1  Up      Normal  99.37 GB   25.00%  0
192.168.100.6   datacenter1 rack1  Up      Normal  106.25 GB  25.00%  42535295865117307932921825928971026432
10.181.137.37   datacenter1 rack1  Up      Normal  77.7 GB    25.00%  85070591730234615865843651857942052863
192.168.100.5   datacenter1 rack1  Up      Normal  494.67 KB  25.00%  127605887595351923798765477786913079296


nodetool netstats shows nothing running on .37 or .5.

I understand that the nature of the beast would cause the load to differ
somewhat between nodes...but I wouldn't expect it to be this drastic.  We had
the token for .37 set to 85070591730234615865843651857942052864, and I
decremented it and ran a move to try to kickstart some streaming, on the
thought that something may have failed, but that didn't yield any appreciable
results.

Are we seeing completely abnormal behavior?  Should I consider making the
token for the fourth node considerably smaller?  We calculated the node's
tokens using the standard python script.
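For reference, the token calculation we used (assuming the usual approach of splitting the 0..2**127 RandomPartitioner range evenly; our exact script may differ in minor details):

```python
# Evenly spaced initial tokens for N nodes under RandomPartitioner,
# whose token range is 0 .. 2**127.
def tokens(nodes: int) -> list[int]:
    return [i * (2**127 // nodes) for i in range(nodes)]

for t in tokens(4):
    print(t)
```

That yields 0, 42535295865117307932921825928971026432, 85070591730234615865843651857942052864, and 127605887595351923798765477786913079296, which matches our ring (modulo the decrement on .37's token).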

-- 
*David McNelis*
Lead Software Engineer
Agentis Energy
www.agentisenergy.com
o: 630.359.6395
c: 219.384.5143

*A Smart Grid technology company focused on helping consumers of energy
control an often under-managed resource.*
