cassandra-user mailing list archives

From Justin Sanciangco <jsancian...@blizzard.com.INVALID>
Subject Re: Changing existing Cassandra cluster from single rack configuration to multi racks configuration
Date Tue, 12 Mar 2019 20:07:28 GMT
Maybe this was an issue specific to my topology in the past, where I had 9 nodes in a 3-rack
implementation. Each rack contained a unique replica set, so when a node went down it put very
high load on the other nodes in the same rack. How does the data get distributed in this case,
where there are only 2 nodes in each of the 3 racks?

- Justin Sanciangco


From: Alexander Dejanovski <alex@thelastpickle.com>
Reply-To: "user@cassandra.apache.org" <user@cassandra.apache.org>
Date: Tuesday, March 12, 2019 at 10:56 AM
To: user <user@cassandra.apache.org>
Subject: Re: Changing existing Cassandra cluster from single rack configuration to multi racks
configuration

Hi Justin,

I'm not sure I follow your reasoning. In a 6-node cluster with 3 racks (2 nodes per rack)
and RF 3, if a node goes down you'll still have one node in each of the other racks to serve
the requests. Nodes within the same rack aren't replicas for the same tokens (as long as
the number of racks is greater than or equal to the RF).
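To make this concrete, here is a minimal sketch (NOT Cassandra's actual implementation; node names, a neatly alternating ring order, and the simplified selection loop are all assumptions for illustration) of NetworkTopologyStrategy-style placement: walk the ring clockwise from a token and prefer nodes in racks not yet used, so each replica set spans distinct racks.

```python
# Simplified rack-aware replica selection, sketching the idea that
# nodes within the same rack don't replicate the same token ranges
# when the number of racks >= RF.

def pick_replicas(ring, rack_of, start, rf):
    """ring: node names in token order; rack_of: node -> rack name."""
    replicas, seen_racks = [], set()
    n = len(ring)
    # First pass: take one replica per distinct rack, walking clockwise.
    for i in range(n):
        node = ring[(start + i) % n]
        if rack_of[node] not in seen_racks:
            replicas.append(node)
            seen_racks.add(rack_of[node])
            if len(replicas) == rf:
                return replicas
    # Fallback (fewer racks than RF): fill with remaining nodes in ring order.
    for i in range(n):
        node = ring[(start + i) % n]
        if node not in replicas:
            replicas.append(node)
            if len(replicas) == rf:
                break
    return replicas

# Hypothetical 6-node cluster: 3 racks, 2 nodes per rack, RF = 3.
ring = ["n1", "n2", "n3", "n4", "n5", "n6"]
rack_of = {"n1": "rack1", "n2": "rack2", "n3": "rack3",
           "n4": "rack1", "n5": "rack2", "n6": "rack3"}

for start in range(len(ring)):
    reps = pick_replicas(ring, rack_of, start, rf=3)
    # Every replica set has exactly one node per rack, so losing one
    # node leaves a live replica in each of the two other racks.
    print(reps, "->", sorted({rack_of[r] for r in reps}))
```

A real ring interleaves racks by token hash rather than this neat alternation, but the rack-skipping walk gives the same property: no two replicas of a range share a rack when racks >= RF.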

Regarding the other question about the decommission/re-bootstrap procedure, imbalances are
indeed to be expected, and I'd favor the DC switch technique, but it may not be an option.

Cheers,

On Tue, Mar 12, 2019 at 18:28, Justin Sanciangco <jsanciangco@blizzard.com.invalid>
wrote:
I would recommend that you not go with a 3-rack, single-DC implementation with only 6 nodes.
If a node goes down in this situation, the node paired with the downed node will have to
service all of its load instead of that load being evenly distributed throughout the cluster.
While it's conceptually nice to have a 3-rack implementation, it does have some negative
implications when not at a proper node count.

What features are you trying to make use of by going multi-rack?

- Justin Sanciangco


From: Laxmikant Upadhyay <laxmikant.hcl@gmail.com>
Reply-To: "user@cassandra.apache.org" <user@cassandra.apache.org>
Date: Monday, March 11, 2019 at 10:52 PM
To: "user@cassandra.apache.org" <user@cassandra.apache.org>
Subject: Re: Changing existing Cassandra cluster from single rack configuration to multi racks
configuration

Hi Alex,

Regarding your point below, the admin needs to take care of the temporary uneven distribution
of data until the entire process is done:

"If you can't, then I guess you can for each node (one at a time), decommission it, wipe it
clean and re-bootstrap it after setting the appropriate rack."

I believe that while doing so in the existing single-rack cluster, the first new node to join
with a different rack (rac2) will receive 100% of the data, so its disk usage will be
proportionally very high in comparison to the other nodes in rac1.
So until both racks have an equal number of nodes and we run nodetool cleanup, the data will
not be equally distributed.
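This effect can be sketched with the same kind of simplified rack-aware selection as above (not Cassandra's actual code; node names and the selection loop are illustrative assumptions): once a single node sits alone in rac2, rack-preferring placement puts it in every replica set, so it temporarily owns a copy of all the data.

```python
# Sketch: 6-node ring where only n6 has been re-bootstrapped into rac2.
# With RF = 3 and rack-aware placement, n6 appears in EVERY replica set.

def pick_replicas(ring, rack_of, start, rf):
    replicas, seen_racks = [], set()
    n = len(ring)
    # Prefer one replica per distinct rack first.
    for i in range(n):
        node = ring[(start + i) % n]
        if rack_of[node] not in seen_racks:
            replicas.append(node)
            seen_racks.add(rack_of[node])
            if len(replicas) == rf:
                return replicas
    # Fewer racks than RF: fill the remainder in ring order.
    for i in range(n):
        node = ring[(start + i) % n]
        if node not in replicas:
            replicas.append(node)
            if len(replicas) == rf:
                break
    return replicas

ring = ["n1", "n2", "n3", "n4", "n5", "n6"]
rack_of = {n: "rac1" for n in ring}
rack_of["n6"] = "rac2"  # the first node moved to the new rack

sets = [pick_replicas(ring, rack_of, s, rf=3) for s in range(len(ring))]
# n6 is the only rac2 node, so it is a replica for every token range.
print(all("n6" in s for s in sets))
```

This is why the imbalance only evens out once each rack holds the same number of nodes and nodetool cleanup has removed the ranges the nodes no longer own.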


On Wed, Mar 6, 2019 at 5:50 PM Alexander Dejanovski <alex@thelastpickle.com>
wrote:
Hi Manish,

the best way, if you have the opportunity to easily add new hardware/instances, is to create
a new DC with racks and switch traffic to the new DC when it's ready (then remove the old
one). My co-worker Alain just wrote a very handy blog post on that technique: http://thelastpickle.com/blog/2019/02/26/data-center-switch.html

If you can't, then I guess you can for each node (one at a time), decommission it, wipe it
clean and re-bootstrap it after setting the appropriate rack.
Also, take into account that your keyspaces must use the NetworkTopologyStrategy so that racks
can be taken into account. Change the strategy prior to adding the new nodes if you're currently
using SimpleStrategy.
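For reference, a strategy change of this kind might look like the following CQL (the keyspace name, DC name, and RF here are placeholders; use your own from nodetool status):

```sql
-- Hypothetical keyspace and DC names: switch from SimpleStrategy to
-- NetworkTopologyStrategy so rack placement is taken into account.
ALTER KEYSPACE my_keyspace
  WITH replication = {'class': 'NetworkTopologyStrategy', 'dc1': 3};

-- Afterwards, run a full repair on the keyspace so replicas are
-- consistent under the new placement, e.g.:
--   nodetool repair -full my_keyspace
```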

You cannot (and shouldn't) try to change the rack on an existing node (the GossipingPropertyFileSnitch
won't allow it).

Cheers,

On Wed, Mar 6, 2019 at 12:15 PM manish khandelwal <manishkhandelwal03@gmail.com>
wrote:
We have a 6-node Cassandra cluster in which all the nodes are in the same rack in a DC. We
want to take advantage of a "multi-rack" cluster (for example: parallel upgrades of all the
nodes in the same rack without downtime). I would like to know the recommended process for
changing an existing cluster from a single-rack configuration to a multi-rack configuration.

I want to introduce 3 racks with 2 nodes in each rack.

Regards
Manish

--
-----------------
Alexander Dejanovski
France
@alexanderdeja

Consultant
Apache Cassandra Consulting
http://www.thelastpickle.com


--

regards,
Laxmikant Upadhyay
