From: Akhil Mehra <akhilmehra@gmail.com>
Date: Tue, 13 Jun 2017 10:24:53 +1200
Subject: Re: Convert single node C* to cluster (rebalancing problem)
To: John Hughes
Cc: Junaid Nasir; "ZAIDI, ASAD A"; Vladimir Yudovin; user@cassandra.apache.org

Great point, John.

The OP should also note that data distribution depends on your schema and
incoming data profile. If your schema is not modelled correctly, you can
easily end up with unevenly distributed data.
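One quick way to see whether the skew is coming from the data model rather
than the token layout is to look at partition-level statistics. A minimal
sketch (the keyspace and table names are placeholders; substitute your own):

    # Per-table statistics, including space used and the estimated
    # number of partitions held on this node:
    nodetool tablestats my_keyspace.my_table

    # Partition size distribution; a max far above the percentiles
    # points at a few oversized partitions dominating the load:
    nodetool tablehistograms my_keyspace my_table

If a handful of partitions dominate, no amount of token rebalancing will
even out the on-disk load.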
Cheers,
Akhil

On Tue, Jun 13, 2017 at 3:36 AM, John Hughes <johnthughes@gmail.com> wrote:

> Is the OP expecting a perfect 50%/50% split? In my experience that is not
> going to happen; it is almost always off by anywhere from a fraction of a
> percent to a couple of percent.
>
> Datacenter: eu-west
> ===================
> Status=Up/Down
> |/ State=Normal/Leaving/Joining/Moving
> --  Address      Load       Tokens  Owns (effective)  Host ID                               Rack
> UN  XX.XX.XX.XX  22.71 GiB  256     47.6%             57dafdde-2f62-467c-a8ff-c91e712f89c9  1c
> UN  XX.XX.XX.XX  17.17 GiB  256     51.3%             d2a65c51-087d-48de-ae1f-a41142eb148d  1b
> UN  XX.XX.XX.XX  26.15 GiB  256     52.4%             acf5dd34-5b81-4e5b-b7be-85a7fccd8e1c  1c
> UN  XX.XX.XX.XX  16.64 GiB  256     50.2%             6c8842dd-a966-467c-a7bc-bd6269ce3e7e  1a
> UN  XX.XX.XX.XX  24.39 GiB  256     49.8%             fd92525d-edf2-4974-8bc5-a350a8831dfa  1a
> UN  XX.XX.XX.XX  23.8 GiB   256     48.7%             bdc597c0-718c-4ef6-b3ef-7785110a9923  1b
>
> Though maybe part of what you are experiencing can be cleared up by
> repair/compaction/cleanup. Also, what are your outputs when you call out
> specific keyspaces? Do the numbers get more even?
>
> Cheers,
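To make those checks concrete, a rough sketch (the keyspace name is a
placeholder, and each command runs on the node it should act on):

    # Ownership is only meaningful per keyspace, because it depends on
    # that keyspace's replication settings:
    nodetool status my_keyspace

    # Housekeeping that can shift the Load column:
    nodetool repair my_keyspace     # sync replicas
    nodetool cleanup my_keyspace    # drop data this node no longer owns
    nodetool compact my_keyspace    # force a major compaction (use sparingly)

Load includes data a node no longer owns until cleanup and compaction have
actually run, so Owns and Load can disagree for quite a while.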
> On Mon, Jun 12, 2017 at 5:22 AM Akhil Mehra <akhilmehra@gmail.com> wrote:
>
>> auto_bootstrap is true by default; ensure it is set to true. On startup,
>> check your logs for the auto_bootstrap value -- it appears on the node
>> configuration line in your log file.
>>
>> Akhil
>>
>> On Mon, Jun 12, 2017 at 6:18 PM, Junaid Nasir <jnasir@an10.io> wrote:
>>
>>> No, I didn't set it (left it at the default value).
>>>
>>> On Fri, Jun 9, 2017 at 3:18 AM, ZAIDI, ASAD A <az192g@att.com> wrote:
>>>
>>>> Did you make sure the auto_bootstrap property was indeed set to [true]
>>>> when you added the node?
>>>>
>>>> From: Junaid Nasir [mailto:jnasir@an10.io]
>>>> Sent: Monday, June 05, 2017 6:29 AM
>>>> To: Akhil Mehra
>>>> Cc: Vladimir Yudovin; user@cassandra.apache.org
>>>> Subject: Re: Convert single node C* to cluster (rebalancing problem)
>>>>
>>>> Not evenly. I have set up a new cluster with a subset of the data
>>>> (around 5 GB). Using the configuration above I am getting these results:
>>>>
>>>> Datacenter: datacenter1
>>>> =======================
>>>> Status=Up/Down
>>>> |/ State=Normal/Leaving/Joining/Moving
>>>> --  Address      Load        Tokens  Owns (effective)  Host ID                               Rack
>>>> UN  10.128.2.1   4.86 GiB    256     44.9%             e4427611-c247-42ee-9404-371e177f5f17  rack1
>>>> UN  10.128.2.10  725.03 MiB  256     55.1%             690d5620-99d3-4ae3-aebe-8f33af54a08b  rack1
>>>>
>>>> Is there anything else I can tweak/check to make the distribution even?
>>>>
>>>> On Sat, Jun 3, 2017 at 3:30 AM, Akhil Mehra <akhilmehra@gmail.com> wrote:
>>>>
>>>> So now the data is evenly balanced on both nodes?
>>>>
>>>> Refer to the following documentation to get a better understanding of
>>>> rpc_address and broadcast_rpc_address:
>>>> https://www.instaclustr.com/demystifying-cassandras-broadcast_address/
>>>> I am surprised that your node started up with rpc_broadcast_address
>>>> set, as this is an unsupported property. I am assuming you are using
>>>> Cassandra version 3.10.
>>>>
>>>> Regards,
>>>> Akhil
>>>>
>>>> On 2/06/2017, at 11:06 PM, Junaid Nasir <jnasir@an10.io> wrote:
>>>>
>>>> I was able to get it working. I added a new node with the following
>>>> changes:
>>>>
>>>> #rpc_address: 0.0.0.0
>>>> rpc_address: 10.128.1.11
>>>> #rpc_broadcast_address: 10.128.1.11
>>>>
>>>> rpc_address had been set to 0.0.0.0 (I ran into a problem with remote
>>>> connections previously and made these changes:
>>>> https://stackoverflow.com/questions/12236898/apache-cassandra-remote-access)
>>>>
>>>> Should this be happening?
>>>>
>>>> On Thu, Jun 1, 2017 at 6:31 PM, Vladimir Yudovin <vladyu@winguzone.com> wrote:
>>>>
>>>> Did you run "nodetool cleanup" on the first node after the second was
>>>> bootstrapped? It should clean out the rows that no longer belong to the
>>>> node after the token ranges changed.
>>>>
>>>> Best regards, Vladimir Yudovin,
>>>> Winguzone - Cloud Cassandra Hosting
>>>>
>>>> ---- On Wed, 31 May 2017 03:55:54 -0400 Junaid Nasir <jnasir@an10.io> wrote ----
>>>>
>>>> Cassandra is supposed to make adding or removing nodes easy and to
>>>> balance the load between nodes when a change is made, but that is not
>>>> working in my case.
>>>>
>>>> I have a single-node C* deployment (with 270 GB of data) and want to
>>>> balance the data across multiple nodes. I followed this guide.
>>>>
>>>> `nodetool status` shows 2 nodes, but the load is not balanced between them:
>>>>
>>>> Datacenter: dc1
>>>> ===============
>>>> Status=Up/Down
>>>> |/ State=Normal/Leaving/Joining/Moving
>>>> --  Address      Load        Tokens  Owns (effective)  Host ID                               Rack
>>>> UN  10.128.0.7   270.75 GiB  256     48.6%             1a3f6faa-4376-45a8-9c20-11480ae5664c  rack1
>>>> UN  10.128.0.14  414.36 KiB  256     51.4%             66a89fbf-08ba-4b5d-9f10-55d52a199b41  rack1
>>>>
>>>> I also ran `nodetool repair` on the new node, but the result is the
>>>> same. Any pointers would be appreciated :)
>>>>
>>>> conf file of the new node:
>>>>
>>>> cluster_name: 'cluster1'
>>>>  - seeds: "10.128.0.7"
>>>> num_tokens: 256
>>>> endpoint_snitch: GossipingPropertyFileSnitch
>>>>
>>>> Thanks,
>>>> Junaid
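Putting the thread's advice together, the usual sequence for growing a
single node into a cluster looks roughly like this (a sketch only; the IPs
match the example above, keyspace names are placeholders, and paths vary by
install):

    # On the new node, before first start: auto_bootstrap must be true
    # (the default when absent) and the node must NOT list itself as a
    # seed, or it will skip bootstrapping entirely.

    # While the new node is joining, it should be streaming data:
    nodetool netstats    # shows the receiving streams during bootstrap

    # Once the new node is UN in `nodetool status`, reclaim space on the
    # ORIGINAL node -- it keeps its old copies until told otherwise:
    nodetool cleanup     # run on 10.128.0.7

    nodetool status my_keyspace    # Load should now track Owns (effective)

That would also explain the 270.75 GiB vs 414.36 KiB output above: if the
new node never actually streamed data, and cleanup never ran on the old one,
Owns can look balanced while Load stays completely lopsided.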
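And to verify the auto_bootstrap value from the logs, as suggested above
(the log path is an assumption -- adjust it for your install):

    # The "Node configuration" line printed at startup lists the
    # effective cassandra.yaml settings, including auto_bootstrap:
    grep 'Node configuration' /var/log/cassandra/system.log

    # A node that really bootstrapped will also have logged its
    # streaming activity:
    grep -i 'bootstrap' /var/log/cassandra/system.log

If auto_bootstrap turns out to be false, or the bootstrap messages are
missing, the new node joined the ring without streaming any data, which
would match the nearly empty Load seen above.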