Subject: Re: Unbalanced ring in Cassandra 0.8.4
From: Raj N <raj.cassandra@gmail.com>
To: user@cassandra.apache.org
Date: Wed, 20 Jun 2012 00:11:47 -0400

But won't that also run a major compaction, which is not recommended anymore?

-Raj

On Sun, Jun 17, 2012 at 11:58 PM, aaron morton <aaron@thelastpickle.com> wrote:

> Assuming you have been running repair, it can't hurt.
>
> Cheers
>
> -----------------
> Aaron Morton
> Freelance Developer
> @aaronmorton
> http://www.thelastpickle.com
>
> On 17/06/2012, at 4:06 AM, Raj N wrote:
>
> Nick, do you think I should still run cleanup on the first node?
>
> -Rajesh
>
> On Fri, Jun 15, 2012 at 3:47 PM, Raj N <raj.cassandra@gmail.com> wrote:
>
>> I did run nodetool move, but that was when I was setting up the cluster,
>> which means I didn't have any data at that time.
>>
>> -Raj
>>
>> On Fri, Jun 15, 2012 at 1:29 PM, Nick Bailey <nick@datastax.com> wrote:
>>
>>> Did you start all your nodes at the correct tokens, or did you balance
>>> by moving them? Moving nodes around won't delete unneeded data after
>>> the move is done.
>>>
>>> Try running 'nodetool cleanup' on all of your nodes.
>>>
>>> On Fri, Jun 15, 2012 at 12:24 PM, Raj N <raj.cassandra@gmail.com> wrote:
>>> > Actually, I am not worried about the percentage; it's the data I am
>>> > concerned about. Look at the first node: it has 102.07 GB of data, and
>>> > the other nodes have around 60 GB (one has 69, but let's ignore that
>>> > one). I don't understand why the first node has almost double the data.
>>> >
>>> > Thanks
>>> > -Raj
>>> >
>>> > On Fri, Jun 15, 2012 at 11:06 AM, Nick Bailey <nick@datastax.com> wrote:
>>> >>
>>> >> This is just a known problem with the nodetool output and multiple
>>> >> DCs. Your configuration is correct. The problem with nodetool is fixed
>>> >> in 1.1.1.
>>> >>
>>> >> https://issues.apache.org/jira/browse/CASSANDRA-3412
>>> >>
>>> >> On Fri, Jun 15, 2012 at 9:59 AM, Raj N <raj.cassandra@gmail.com> wrote:
>>> >> > Hi experts,
>>> >> >     I have a 6-node cluster across 2 DCs (DC1: 3, DC2: 3). I have
>>> >> > assigned tokens using the first strategy (adding 1) mentioned here:
>>> >> >
>>> >> > http://wiki.apache.org/cassandra/Operations?#Token_selection
>>> >> >
>>> >> > But when I run nodetool ring on my cluster, this is the result I get:
>>> >> >
>>> >> > Address       DC   Rack   Status  State   Load       Owns    Token
>>> >> >                                                               113427455640312814857969558651062452225
>>> >> > 172.17.72.91  DC1  RAC13  Up      Normal  102.07 GB  33.33%  0
>>> >> > 45.10.80.144  DC2  RAC5   Up      Normal  59.1 GB    0.00%   1
>>> >> > 172.17.72.93  DC1  RAC18  Up      Normal  59.57 GB   33.33%  56713727820156407428984779325531226112
>>> >> > 45.10.80.146  DC2  RAC7   Up      Normal  59.64 GB   0.00%   56713727820156407428984779325531226113
>>> >> > 172.17.72.95  DC1  RAC19  Up      Normal  69.58 GB   33.33%  113427455640312814857969558651062452224
>>> >> > 45.10.80.148  DC2  RAC9   Up      Normal  59.31 GB   0.00%   113427455640312814857969558651062452225
>>> >> >
>>> >> > As you can see, the first node has considerably more load than the
>>> >> > others (almost double), which is surprising since all of these nodes
>>> >> > are replicas of each other. I am running Cassandra 0.8.4. Is there an
>>> >> > explanation for this behaviour? Could
>>> >> > https://issues.apache.org/jira/browse/CASSANDRA-2433 be the cause for
>>> >> > this?
>>> >> >
>>> >> > Thanks
>>> >> > -Raj
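
For reference, the "adding 1" token-selection strategy from the wiki page above amounts to spacing each data center's tokens evenly around the ring on its own, then shifting every additional DC by 1 so no two nodes share a token. A minimal Python sketch of that idea (illustrative only; it uses exact integer division, so the low-order digits differ slightly from the tokens shown in the ring output above):

    # Sketch of the "offset each additional DC by 1" token-selection strategy.
    # Assumes the RandomPartitioner token space of 0..2**127 used by Cassandra 0.8.x.
    RING_SIZE = 2 ** 127
    NODES_PER_DC = 3

    def tokens_for_dc(dc_offset, nodes_per_dc=NODES_PER_DC):
        # Space the DC's nodes evenly around the ring, then shift the whole
        # DC by its offset (0 for DC1, 1 for DC2) so tokens never collide.
        return [i * RING_SIZE // nodes_per_dc + dc_offset for i in range(nodes_per_dc)]

    for offset, dc_name in enumerate(["DC1", "DC2"]):
        for token in tokens_for_dc(offset):
            print(dc_name, token)

Because each DC2 token sits only 1 above a DC1 token, its primary range in the global ring is essentially empty, which is why nodetool ring in 0.8.x shows 0.00% ownership for those nodes; the display was made DC-aware by CASSANDRA-3412.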
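Nick's suggestion to run cleanup on every node could be scripted along these lines. A sketch only: the host list comes from the ring output above, and it assumes nodetool is on the PATH and can reach each node over JMX with -h; running 'nodetool cleanup' locally on each box works just as well.

    # Sketch: run 'nodetool cleanup' against each node in turn.
    import subprocess

    # Hosts taken from the nodetool ring output above; adjust for your cluster.
    HOSTS = [
        "172.17.72.91", "45.10.80.144", "172.17.72.93",
        "45.10.80.146", "172.17.72.95", "45.10.80.148",
    ]

    for host in HOSTS:
        # Cleanup drops data the node no longer owns (e.g. after nodetool move).
        # It is I/O heavy, so run it one node at a time rather than in parallel.
        subprocess.check_call(["nodetool", "-h", host, "cleanup"])

Note that cleanup rewrites SSTables to drop data the node no longer owns; it is not a major compaction that merges everything into a single SSTable, though it does cost comparable I/O while it runs.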