incubator-cassandra-user mailing list archives

From aaron morton <aa...@thelastpickle.com>
Subject Re: unbalanced ring
Date Tue, 05 Feb 2013 20:41:05 GMT
Use nodetool status with vnodes: http://www.datastax.com/dev/blog/upgrading-an-existing-cluster-to-vnodes
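A minimal invocation, assuming the same install directory shown in the ring output below; with vnodes, status prints one summary line per node instead of one line per token:

    bin/nodetool -h 10.28.205.125 status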

The different load can be caused by rack affinity; are all the nodes in the same rack? Another
simple check: have you created some very big rows?
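Two quick things to check from the shell, assuming a tarball install laid out like the one in the ring output below: which snitch is assigning rack/DC, and whether any column family holds a few very large rows (cfstats reports the compacted row maximum size per column family):

    grep endpoint_snitch conf/cassandra.yaml
    bin/nodetool -h 10.28.205.125 cfstats | grep -E 'Column Family|Compacted row maximum size'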
Cheers

-----------------
Aaron Morton
Freelance Cassandra Developer
New Zealand

@aaronmorton
http://www.thelastpickle.com

On 6/02/2013, at 8:40 AM, Stephen.M.Thompson@wellsfargo.com wrote:

> So I have three nodes in a ring in one data center.  My configuration has num_tokens: 256
> set and initial_token commented out.  When I look at the ring, it shows me all of the token
> ranges of course, and basically identical data for each range on each node.  Here is the
> Cliff’s Notes version of what I see:
>  
> [root@Config3482VM2 apache-cassandra-1.2.0]# bin/nodetool ring
>  
> Datacenter: 28
> ==========
> Replicas: 1
>  
> Address         Rack        Status State   Load            Owns                Token
>                                                                                9187343239835811839
> 10.28.205.125   205         Up     Normal  2.85 GB         33.69%              -3026347817059713363
> 10.28.205.125   205         Up     Normal  2.85 GB         33.69%              -3026276684526453414
> 10.28.205.125   205         Up     Normal  2.85 GB         33.69%              -3026205551993193465
>   (etc)
> 10.28.205.126   205         Up     Normal  1.15 GB         100.00%             -9187343239835811840
> 10.28.205.126   205         Up     Normal  1.15 GB         100.00%             -9151314442816847872
> 10.28.205.126   205         Up     Normal  1.15 GB         100.00%             -9115285645797883904
>   (etc)
> 10.28.205.127   205         Up     Normal  69.13 KB        66.30%              -9223372036854775808
> 10.28.205.127   205         Up     Normal  69.13 KB        66.30%              36028797018963967
> 10.28.205.127   205         Up     Normal  69.13 KB        66.30%              72057594037927935
>   (etc)
>  
> So at this point I have a number of questions.  The biggest question is about Load.  Why
> does the .125 node have 2.85 GB, .126 has 1.15 GB, and .127 has only 0.000069 GB?  These
> boxes are all comparable and all configured identically.
>  
> partitioner: org.apache.cassandra.dht.Murmur3Partitioner
>  
> I’m sorry to ask so many questions – I’m having a hard time finding documentation
> that explains this stuff.
>  
> Stephen

