incubator-cassandra-user mailing list archives

From Raj N <raj.cassan...@gmail.com>
Subject Re: Unbalanced ring in Cassandra 0.8.4
Date Sat, 16 Jun 2012 16:06:33 GMT
Nick, do you think I should still run cleanup on the first node?

-Rajesh

On Fri, Jun 15, 2012 at 3:47 PM, Raj N <raj.cassandra@gmail.com> wrote:

> I did run nodetool move, but that was when I was setting up the cluster,
> which means I didn't have any data at that time.
>
> -Raj
>
>
> On Fri, Jun 15, 2012 at 1:29 PM, Nick Bailey <nick@datastax.com> wrote:
>
>> Did you start all your nodes at the correct tokens or did you balance
>> by moving them? Moving nodes around won't delete unneeded data after
>> the move is done.
>>
>> Try running 'nodetool cleanup' on all of your nodes.
>>
>> On Fri, Jun 15, 2012 at 12:24 PM, Raj N <raj.cassandra@gmail.com> wrote:
>> > Actually I am not worried about the percentage. It's the data I am
>> > concerned about. Look at the first node: it has 102.07 GB of data, while
>> > the other nodes have around 60 GB (one has 69, but let's ignore that one).
>> > I don't understand why the first node has almost double the data.
>> >
>> > Thanks
>> > -Raj
>> >
>> >
>> > On Fri, Jun 15, 2012 at 11:06 AM, Nick Bailey <nick@datastax.com> wrote:
>> >>
>> >> This is just a known problem with the nodetool output and multiple
>> >> DCs. Your configuration is correct. The problem with nodetool is fixed
>> >> in 1.1.1:
>> >>
>> >> https://issues.apache.org/jira/browse/CASSANDRA-3412
>> >>
>> >> > On Fri, Jun 15, 2012 at 9:59 AM, Raj N <raj.cassandra@gmail.com> wrote:
>> >> > Hi experts,
>> >> >     I have a 6 node cluster across 2 DCs(DC1:3, DC2:3). I have
>> assigned
>> >> > tokens using the first strategy(adding 1) mentioned here -
>> >> >
>> >> > http://wiki.apache.org/cassandra/Operations?#Token_selection
>> >> >
>> >> > But when I run nodetool ring on my cluster, this is the result I get:
>> >> >
>> >> > Address         DC  Rack  Status State   Load        Owns    Token
>> >> >                                                             113427455640312814857969558651062452225
>> >> > 172.17.72.91    DC1 RAC13 Up     Normal  102.07 GB   33.33%  0
>> >> > 45.10.80.144    DC2 RAC5  Up     Normal  59.1 GB     0.00%   1
>> >> > 172.17.72.93    DC1 RAC18 Up     Normal  59.57 GB    33.33%  56713727820156407428984779325531226112
>> >> > 45.10.80.146    DC2 RAC7  Up     Normal  59.64 GB    0.00%   56713727820156407428984779325531226113
>> >> > 172.17.72.95    DC1 RAC19 Up     Normal  69.58 GB    33.33%  113427455640312814857969558651062452224
>> >> > 45.10.80.148    DC2 RAC9  Up     Normal  59.31 GB    0.00%   113427455640312814857969558651062452225
>> >> >
>> >> >
>> >> > As you can see, the first node has considerably more load than the
>> >> > others (almost double), which is surprising since all of these are
>> >> > replicas of each other. I am running Cassandra 0.8.4. Is there an
>> >> > explanation for this behaviour? Could
>> >> > https://issues.apache.org/jira/browse/CASSANDRA-2433 be the cause?
>> >> >
>> >> > Thanks
>> >> > -Raj
>> >
>> >
>>
>
>
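The "adding 1" token-selection strategy from the wiki page linked in the thread can be sketched as follows. This is a minimal illustration, not an official token generator: it uses exact integer division, so the low-order digits differ slightly from the tokens in the ring output above (those were presumably produced by a generator that divided in floating point).

```python
# Sketch of the "offset by 1" token-selection strategy for two datacenters:
# space the first DC's tokens evenly around the ring, then give the second
# DC the same tokens shifted by 1 so no two nodes share a token.
RING = 2 ** 127          # RandomPartitioner token space
NODES_PER_DC = 3

dc1 = [i * RING // NODES_PER_DC for i in range(NODES_PER_DC)]
dc2 = [t + 1 for t in dc1]  # offset by 1 to avoid collisions with DC1

print("DC1 tokens:", dc1)
print("DC2 tokens:", dc2)
```

With this layout each DC is itself a perfectly balanced ring, which is why the configuration in the thread is correct even though the naive per-ring ownership display makes the second DC look empty.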
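The 0.00% figures Nick attributes to CASSANDRA-3412 can be reproduced with a small sketch. This is a hypothetical reconstruction of the naive calculation, not nodetool's actual code: before the 1.1.1 fix, ownership was reported as the span between a node's token and the previous token on a single ring, with datacenters ignored, so a node placed at "neighbour + 1" appears to own almost nothing.

```python
# Naive single-ring ownership calculation; tokens taken from the
# nodetool ring output in the thread above.
RING = 2 ** 127  # RandomPartitioner token space

tokens = [
    0,                                          # 172.17.72.91 (DC1)
    1,                                          # 45.10.80.144 (DC2)
    56713727820156407428984779325531226112,    # 172.17.72.93 (DC1)
    56713727820156407428984779325531226113,    # 45.10.80.146 (DC2)
    113427455640312814857969558651062452224,   # 172.17.72.95 (DC1)
    113427455640312814857969558651062452225,   # 45.10.80.148 (DC2)
]

owns = {}
for i, t in enumerate(tokens):
    prev = tokens[i - 1]                  # i == 0 wraps to the last token
    owns[t] = ((t - prev) % RING) / RING  # fraction of the ring "owned"

for t, frac in owns.items():
    print(f"{t:>40}  {frac:7.2%}")
```

Running this gives roughly 33.33% for each DC1 node and effectively 0% for each DC2 node, exactly the Owns column in the ring output, even though the two DCs hold full replicas of the same data.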
