cassandra-user mailing list archives

From Eunsu Kim <eunsu.bil...@gmail.com>
Subject Re: Adding datacenter and data verification
Date Tue, 18 Sep 2018 05:17:41 GMT
Yes, I altered the system_auth keyspace before adding the data center.

However, I suspect that the new data center did not receive the system_auth data, and therefore clients could not authenticate against it: the new data center was not given a replica count when the keyspace was altered.

Do your clients set the 'withUsedHostsPerRemoteDc' option?
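For reference, this is roughly how that option is set with the DataStax Java driver 3.x; the contact point, local DC name, and host count below are just placeholders:

```java
import com.datastax.driver.core.Cluster;
import com.datastax.driver.core.policies.DCAwareRoundRobinPolicy;

Cluster cluster = Cluster.builder()
    .addContactPoint("10.0.0.1")              // placeholder contact point
    .withLoadBalancingPolicy(
        DCAwareRoundRobinPolicy.builder()
            .withLocalDc("datacenter1")       // placeholder local DC name
            .withUsedHostsPerRemoteDc(2)      // allow falling back to 2 hosts per remote DC
            .build())
    .build();
```

With withUsedHostsPerRemoteDc > 0 the driver may try hosts in remote datacenters, which would explain clients hitting the new DC before system_auth was replicated there.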


> On 18 Sep 2018, at 1:17 PM, Pradeep Chhetri <pradeep@stashaway.com> wrote:
> 
> Hello Eunsu,
> 
> I am also using PasswordAuthenticator in my cassandra cluster. I didn't come across this issue while doing the exercise on preprod.
> 
> Are you sure that you changed the configuration of the system_auth keyspace before adding the new datacenter, using this:
> 
> ALTER KEYSPACE system_auth WITH REPLICATION = {'class': 'NetworkTopologyStrategy', 'datacenter1': '3'};
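For what it's worth, once the new datacenter is in the picture, the same statement would need an entry for it too, so that its nodes hold the auth data. The name 'datacenter2' below is an assumption:

```sql
ALTER KEYSPACE system_auth WITH REPLICATION = {
  'class': 'NetworkTopologyStrategy',
  'datacenter1': 3,
  'datacenter2': 3   -- replicas in the new DC as well
};
```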
> 
> Regards,
> Pradeep
> 
> 
> 
> On Tue, Sep 18, 2018 at 7:23 AM, Eunsu Kim <eunsu.bill23@gmail.com> wrote:
> 
> In my case, there were authentication issues when adding data centers.
> 
> I was using a PasswordAuthenticator.
> 
> As soon as the datacenter was added, the following authentication error was recorded in the client log file.
> 
> com.datastax.driver.core.exceptions.AuthenticationException: Authentication error on host /xxx.xxx.xxx.xx:9042: Provided username apm and/or password are incorrect
> 
> I was using DCAwareRoundRobinPolicy, but I guess it's probably because of the withUsedHostsPerRemoteDc option.
> 
> I took several steps and the error log disappeared. What fixed it was probably 'nodetool rebuild' after altering the system_auth keyspace.
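For reference, that rebuild step looks roughly like this, run on each node of the new DC; 'datacenter1' is the assumed name of the old DC:

```shell
# Stream existing data (including system_auth) from the old
# datacenter into this node of the new datacenter.
nodetool rebuild -- datacenter1
```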
> 
> However, the procedure was not clearly defined.
> 
> 
>> On 18 Sep 2018, at 2:40 AM, Pradeep Chhetri <pradeep@stashaway.com> wrote:
>> 
>> Hello Alain,
>> 
>> Thank you very much for reviewing it. Your answer on seed nodes cleared my doubts. I will update it as per your suggestion.
>> 
>> I have a few follow-up questions on decommissioning the datacenter:
>> 
>> - Do I need to run nodetool repair -full on each of the nodes (old + new dc nodes) before starting the decommissioning process of the old dc?
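A sketch of that repair step, assuming it is run node by node:

```shell
# Full (non-incremental) repair of all keyspaces on this node;
# repeat on each node before decommissioning the old DC.
nodetool repair -full
```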
>> - We have around 15 apps using the Cassandra cluster. Before starting on the new datacenter, I want to make sure that all queries are going with the right consistency level, i.e. LOCAL_QUORUM instead of QUORUM. Is there a way I can log the consistency level of each query in some log file?
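One way to spot-check this without touching the apps is Cassandra's request tracing; traced sessions record the consistency level used. The sampling probability below is just an example:

```shell
# Trace ~1% of requests on this node
nodetool settraceprobability 0.01
# Then, from cqlsh, inspect the recorded sessions; the 'parameters'
# map of each row includes the consistency level of the traced query:
#   SELECT parameters FROM system_traces.sessions LIMIT 10;
```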
>> 
>> Regards,
>> Pradeep
>> 
>> On Mon, Sep 17, 2018 at 9:26 PM, Alain RODRIGUEZ <arodrime@gmail.com> wrote:
>> Hello Pradeep,
>> 
>> It looks good to me and it's a cool runbook for you to follow and for others to reuse.
>> 
>> To make sure that cassandra nodes in one datacenter can see the nodes of the other datacenter, add the seed node of the new datacenter in any of the old datacenter's nodes and restart that node.
>> 
>> Nodes seeing each other across datacenters is not related to seeds. It's indeed recommended to use seeds from all the datacenters (a couple or 3 per DC). I guess it's to increase the availability of seed nodes and/or maybe to make sure local seeds are available.
>> 
>> You can perfectly (and even have to) add your second datacenter's nodes using seeds from the first datacenter. A bootstrapping node should never be in the list of seeds unless it's the first node of the cluster. Add nodes first, then make them seeds.
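For illustration, the resulting seed list in cassandra.yaml would mix nodes from both DCs; the addresses below are placeholders:

```yaml
seed_provider:
  - class_name: org.apache.cassandra.locator.SimpleSeedProvider
    parameters:
      # two seeds from the old DC, two from the new one (placeholders)
      - seeds: "10.0.1.1,10.0.1.2,10.1.1.1,10.1.1.2"
```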
>> 
>> 
>> On Mon, 17 Sep 2018 at 11:25, Pradeep Chhetri <pradeep@stashaway.com> wrote:
>> Hello everyone,
>> 
>> Can someone please help me validate the steps I am following to migrate the Cassandra snitch?
>> 
>> Regards,
>> Pradeep
>> 
>> On Wed, Sep 12, 2018 at 1:38 PM, Pradeep Chhetri <pradeep@stashaway.com> wrote:
>> Hello
>> 
>> I am running a Cassandra 3.11.3 5-node cluster on AWS with SimpleSnitch. I was testing the process of migrating to GossipingPropertyFileSnitch (GPFS), using the AWS region as the datacenter name and the AWS zone as the rack name, in my preprod environment and was able to achieve it.
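With GPFS, each node's DC and rack come from conf/cassandra-rackdc.properties; for an AWS-based naming scheme it would look something like this (region/zone values are examples):

```properties
# conf/cassandra-rackdc.properties on each node
dc=ap-southeast-1      # AWS region as datacenter name (example)
rack=ap-southeast-1a   # AWS zone as rack name (example)
```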
>> 
>> But before decommissioning the older datacenter, I want to verify that the data in the newer dc is consistent with the data in the older dc. Is there an easy way to do that?
>> 
>> Do you suggest running a full repair before decommissioning the nodes of the older datacenter?
>> 
>> I am using the steps documented here: https://medium.com/p/465e9bf28d99. I will be very happy if someone can confirm that I am doing the right steps.
>> 
>> Regards,
>> Pradeep
>> 

