cassandra-user mailing list archives

From "Durity, Sean R" <>
Subject RE: [EXTERNAL] Re: Adding datacenter and data verification
Date Tue, 18 Sep 2018 13:41:01 GMT
You are correct that altering the keyspace replication settings does not actually move any
data; it only affects new writes and reads. system_auth is one keyspace that needs to be repaired quickly
after the change. Alternatively, if your number of users/permissions is relatively small, you can simply
re-insert them after the ALTER, and the data will be written to all the proper, new nodes.
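
Both options above can be sketched roughly as follows (a hedged example; the datacenter names, replication factors, and the role name are all illustrative and must be adapted to your own topology):

```
# Adjust system_auth replication to cover the new datacenter
# (DC names and replication factors are illustrative)
cqlsh -e "ALTER KEYSPACE system_auth WITH REPLICATION = {
  'class': 'NetworkTopologyStrategy', 'dc1': 3, 'dc2': 3};"

# Option A: repair system_auth so existing role data reaches the new replicas
nodetool repair system_auth

# Option B: with few users, simply re-create the roles/permissions,
# so the fresh writes land on all the new replicas
cqlsh -e "CREATE ROLE IF NOT EXISTS app_user WITH PASSWORD = 'secret' AND LOGIN = true;"
```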

Sean Durity

From: Pradeep Chhetri <>
Sent: Tuesday, September 18, 2018 1:55 AM
Subject: [EXTERNAL] Re: Adding datacenter and data verification

Hi Eunsu,

After going through the documentation, I think you are right: you shouldn't use withUsedHostsPerRemoteDc,
because it will contact nodes in other datacenters. No, I don't use withUsedHostsPerRemoteDc;
I use the withLocalDc option instead.

On Tue, Sep 18, 2018 at 11:02 AM, Eunsu Kim <<>>
Yes, I altered the system_auth key space before adding the data center.

However, I suspect that the new datacenter did not receive the system_auth data and therefore
clients could not authenticate against it: altering the keyspace only set the replica count
for the new datacenter, it did not stream the existing data there.
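
One way to check whether the new datacenter actually holds system_auth replicas is to ask which endpoints own a given role's partition (a sketch; 'apm' is just an example role name, and the node address is a placeholder):

```
# Replica endpoints for the 'apm' role; nodes from the new DC should
# appear once the ALTER (and a repair/rebuild) has taken effect
nodetool getendpoints system_auth roles apm

# Inspect the roles visible from a node in the new datacenter
cqlsh <new-dc-node> -u cassandra \
  -e "CONSISTENCY LOCAL_ONE; SELECT role FROM system_auth.roles;"
```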

Do your clients have the 'withUsedHostsPerRemoteDc' option?

On 18 Sep 2018, at 1:17 PM, Pradeep Chhetri <<>>

Hello Eunsu,

I am also using PasswordAuthenticator in my Cassandra cluster. I didn't come across this issue
while doing the exercise on preprod.

Are you sure that you changed the replication configuration of the system_auth keyspace before adding the
new datacenter, using something like this:

ALTER KEYSPACE system_auth WITH REPLICATION = {'class': 'NetworkTopologyStrategy', 'datacenter1': <rf>};


On Tue, Sep 18, 2018 at 7:23 AM, Eunsu Kim <<>>

In my case, there were authentication issues when adding data centers.

I was using a PasswordAuthenticator.

As soon as the datacenter was added, the following authentication error was recorded in
the client log file:

com.datastax.driver.core.exceptions.AuthenticationException: Authentication error on host
/ Provided username apm and/or password are incorrect

I was using DCAwareRoundRobinPolicy, but I guess it was probably because of the withUsedHostsPerRemoteDc option.

I took several steps and the error log disappeared. The step that resolved it was probably
'nodetool rebuild' after altering the system_auth keyspace.

However, the procedure was not clearly defined.
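
The rough sequence that seemed to resolve this can be sketched as follows (hedged; 'dc1' stands in for the name of the existing datacenter, and the rebuild is run on each node of the new datacenter):

```
# After ALTERing system_auth to include the new DC, stream the existing
# data from the old DC; run this on every node of the new datacenter
nodetool rebuild -- dc1
```

Until the rebuild (or a repair) completes, nodes in the new datacenter may have no local copy of the credentials, which would explain the authentication errors.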

On 18 Sep 2018, at 2:40 AM, Pradeep Chhetri <<>>

Hello Alain,

Thank you very much for reviewing it. Your answer on seed nodes cleared my doubts. I will update
it as per your suggestion.

I have a few follow-up questions on decommissioning the datacenter:

- Do I need to run nodetool repair -full on each of the nodes (old + new DC nodes) before
starting the decommissioning process of the old DC?
- We have around 15 apps using the Cassandra cluster. I want to make sure that all queries, before
starting the new datacenter, are going with the right consistency level, i.e. LOCAL_QUORUM instead
of QUORUM. Is there a way I can log the consistency level of each query somehow in some log file?


On Mon, Sep 17, 2018 at 9:26 PM, Alain RODRIGUEZ <<>>
Hello Pradeep,

It looks good to me and it's a cool runbook for you to follow and for others to reuse.

To make sure that cassandra nodes in one datacenter can see the nodes of the other datacenter,
add the seed node of the new datacenter in any of the old datacenter’s nodes and restart
that node.

Nodes seeing each other across datacenters is not related to seeds. It is indeed recommended
to use seeds from all the datacenters (a couple or three per DC). I guess that is to increase availability
of the seed nodes and/or to make sure local seeds are available.

You can perfectly well (and in fact have to) add your second datacenter's nodes using seeds from the
first datacenter. A bootstrapping node should never be in the list of seeds unless it's the
first node of the cluster. Add the nodes first, then make them seeds.
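
In cassandra.yaml terms this might look like the following config fragment (a sketch; the IP addresses are placeholders, with two seeds drawn from each datacenter once the new nodes have finished bootstrapping):

```
seed_provider:
    - class_name: org.apache.cassandra.locator.SimpleSeedProvider
      parameters:
          # two or three seeds per datacenter; a node bootstrapping for
          # the first time must not list itself among the seeds
          - seeds: "10.0.1.10,10.0.1.11,10.0.2.10,10.0.2.11"
```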

On Mon, 17 Sep 2018 at 11:25, Pradeep Chhetri <<>>
wrote:
Hello everyone,

Can someone please help me validate the steps I am following to migrate the Cassandra snitch?


On Wed, Sep 12, 2018 at 1:38 PM, Pradeep Chhetri <<>>

I am running a Cassandra 3.11.3 5-node cluster on AWS with SimpleSnitch. I was testing the process
of migrating to GPFS, using the AWS region as the datacenter name and the AWS zone as the rack name,
in my preprod environment, and I was able to achieve it.

But before decommissioning the older datacenter, I want to verify that the data in the newer DC
is consistent with the data in the older DC. Is there any easy way to do that?

Do you suggest running a full repair before decommissioning the nodes of the older datacenter?
I am using the steps documented here:<>
I will be very happy if someone can confirm that I am following the right steps.
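
Pulling the advice from this thread together, the migration tail end might look roughly like this (a hedged sketch, not a definitive runbook; 'olddc' and 'newdc' are placeholders for your datacenter names):

```
# 1. On each node of the new DC, stream existing data from the old DC
nodetool rebuild -- olddc

# 2. Run a full repair on each node so both datacenters converge
#    before any comparison or decommission
nodetool repair -full

# 3. Once clients only target 'newdc' (LOCAL_* consistency levels) and the
#    keyspaces no longer replicate to 'olddc', retire old nodes one by one
nodetool decommission
```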


