incubator-cassandra-user mailing list archives

From Rahul Gupta <rgu...@dekaresearch.com>
Subject RE: Host ID collision making node disappear
Date Wed, 13 Aug 2014 14:28:12 GMT
Found the issue and the solution.
Every node has a peers column family in the system keyspace.
When a VM is copied over and run as a new node, peers still contains the old data (host IDs).

Deleting log files and data files does not solve this issue.
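
The stale entries can be seen directly; a rough sketch, assuming an unauthenticated cqlsh and
nodetool can reach both machines:

# if the host IDs collide, both nodes report the same value on the ID line of nodetool info
nodetool -h 172.17.3.1 info
nodetool -h 172.17.0.173 info

# the peers data a node keeps about the others can be inspected the same way
echo "SELECT peer, host_id FROM system.peers;" | cqlsh 172.17.0.173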

There are two solutions to this:

1. Do not clone an existing Cassandra node to use it as an additional node. Always start
with a fresh machine that has never had Cassandra installed on it.

OR

2. Fix the host ID in the peers column family in the system keyspace: generate a new UUID
and update the row for the newly added peer. This needs to be done on every existing Cassandra
node (a rough sketch follows below).
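
For reference, that could look roughly like the following from a shell, using the IPs from this
thread; it assumes uuidgen and an unauthenticated cqlsh are available, and that this Cassandra
version accepts direct writes to system.peers, which is worth verifying first:

# generate a fresh host ID for the cloned node
NEW_ID=$(uuidgen)

# on each existing node (172.17.3.1, .2, .3), repoint the stale peers row for 172.17.0.173
echo "UPDATE system.peers SET host_id = ${NEW_ID} WHERE peer = '172.17.0.173';" | cqlsh 172.17.3.1

# alternatively, drop the stale row and let gossip repopulate it
echo "DELETE FROM system.peers WHERE peer = '172.17.0.173';" | cqlsh 172.17.3.1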

Thanks,
Rahul Gupta
DEKA Research & Development<http://www.dekaresearch.com/>
340 Commercial St  Manchester, NH  03101
P: 603.666.3908 extn. 6504 | C: 603.718.9676


From: Jens Rantil [mailto:jens.rantil@tink.se]
Sent: Friday, August 08, 2014 1:09 PM
To: user@cassandra.apache.org
Cc: user@cassandra.apache.org
Subject: Re: Host ID collision making node disappear

Rahul,

I'm pretty sure it's preferable to clean all files and directories in /var/log/cassandra before
starting up the new Cassandra node. This will make it start from a clean slate, resetting all
state from the previous node.
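
For reference, a wipe along those lines might look roughly like this; the paths and service names
below are the defaults for a package install and may differ, and the persisted state (data,
commitlog, saved caches) normally lives under /var/lib/cassandra, while /var/log/cassandra holds
only the logs:

# stop the cloned node before wiping anything
sudo service cassandra stop    # or: sudo service dse stop, for a DSE package install

# clear the logs and the persisted state
sudo rm -rf /var/log/cassandra/*
sudo rm -rf /var/lib/cassandra/data/* /var/lib/cassandra/commitlog/* /var/lib/cassandra/saved_caches/*

sudo service cassandra start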

Cheers,
Jens


On Fri, Aug 8, 2014 at 6:21 PM, Rahul Gupta <rgupta@dekaresearch.com> wrote:
I have a 3-node Cassandra cluster. Using DataStax Enterprise v4.5.1 on VMware.
I am adding 1 new node to this cluster for running an Analytics workload.


So I cloned one of the existing Cassandra VMs, changed the hostname, restarted the VM, then updated
the cassandra.yaml file and restarted Cassandra.
172.17.3.1 – Cassandra node
172.17.0.173 – Analytics node, cloned from the node above.


Now when this new node joins the cluster, the existing node seems to disappear.
I thought it was an issue with tokens, so I moved the new node to a new token; still the same
problem.


In the log files I see:


INFO [HANDSHAKE-/172.17.3.1] 2014-08-08 11:59:18,847 OutboundTcpConnection.java (line 386)
Handshaking version with /172.17.3.1
INFO [GossipStage:1] 2014-08-08 11:59:19,094 Gossiper.java (line 910) Node /172.17.3.1 is
now part of the cluster
WARN [GossipStage:1] 2014-08-08 11:59:19,100 StorageService.java (line 1572) Not updating
host ID 3ce2cc13-7a3c-45cf-9a14-b29b0b7cfb4e for /172.17.3.1 because it's mine


When checked through nodetool on the new node, it shows only three nodes; 172.17.3.1 is not
showing up.


# nodetool ring -h 172.17.0.173
Note: Ownership information does not include topology; for complete information, specify a
keyspace
Datacenter: Analytics
==========
Address       Rack        Status State   Load            Owns                Token
172.17.0.173  rack1       Up     Normal  15.65 GB        33.33%              28356863910078205288614550619314017621
Datacenter: Cassandra
==========
Address       Rack        Status State   Load            Owns                Token
                                                                           141784319550391026443072753096570088106
172.17.3.2    rack1       Up     Normal  19.42 GB        33.33%              85070591730234615865843651857942052864
172.17.3.3    rack1       Up     Normal  18.77 GB        33.33%              141784319550391026443072753096570088106




When checked through nodetool on the old node, it shows only three nodes; 172.17.0.173 is
not showing up.


# nodetool ring -h 172.17.3.1
Note: Ownership information does not include topology; for complete information, specify a
keyspace
Datacenter: Cassandra
==========
Address     Rack        Status State   Load            Owns                Token
                                                                           141784319550391026443072753096570088106
172.17.3.1  rack1       Up     Normal  15.69 GB        33.33%              28356863910078205288614550619314017620
172.17.3.2  rack1       Up     Normal  19.43 GB        33.33%              85070591730234615865843651857942052864
172.17.3.3  rack1       Up     Normal  18.77 GB        33.33%              141784319550391026443072753096570088106


Thanks,
Rahul Gupta
DEKA Research & Development<http://www.dekaresearch.com/>
340 Commercial St  Manchester, NH  03101
P: 603.666.3908 extn. 6504 | C: 603.718.9676








