incubator-cassandra-user mailing list archives

From Rahul Gupta <rgu...@dekaresearch.com>
Subject Host ID collision making node disappear
Date Fri, 08 Aug 2014 16:21:06 GMT
I have a 3-node Cassandra cluster running DataStax Enterprise v4.5.1 on VMware.
I am adding one new node to this cluster to run an Analytics workload.

So I cloned one of the existing Cassandra VMs, changed the hostname, restarted the VM, then updated
the cassandra.yaml file (sketched below) and restarted Cassandra.
172.17.3.1 - Cassandra Node
172.17.0.173 - Analytics Node, cloned from the node above.
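
The cassandra.yaml edit on the clone was essentially just repointing the address settings at the new
IP, roughly along these lines (a sketch assuming a packaged DSE install, not the exact commands I ran;
the seed list was left pointing at the original cluster):

# service dse stop
# sed -i 's/^listen_address:.*/listen_address: 172.17.0.173/' /etc/dse/cassandra/cassandra.yaml
# sed -i 's/^rpc_address:.*/rpc_address: 172.17.0.173/' /etc/dse/cassandra/cassandra.yaml
# service dse start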

Now when this new node joins the cluster, the existing node seems to disappear.
I thought it was an issue with tokens, so I moved the new node to a new token, but the problem
remains.
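
The move itself was done with nodetool, something along these lines (using the token the new node
now shows in the ring output further down):

# nodetool -h 172.17.0.173 move 28356863910078205288614550619314017621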

In the log files I see:

INFO [HANDSHAKE-/172.17.3.1] 2014-08-08 11:59:18,847 OutboundTcpConnection.java (line 386)
Handshaking version with /172.17.3.1
INFO [GossipStage:1] 2014-08-08 11:59:19,094 Gossiper.java (line 910) Node /172.17.3.1 is
now part of the cluster
WARN [GossipStage:1] 2014-08-08 11:59:19,100 StorageService.java (line 1572) Not updating
host ID 3ce2cc13-7a3c-45cf-9a14-b29b0b7cfb4e for /172.17.3.1 because it's mine
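
Since nodetool status prints a Host ID column, comparing the two nodes directly should confirm
whether they really report the same ID, along the lines of:

# nodetool -h 172.17.3.1 status
# nodetool -h 172.17.0.173 status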

When checked through nodetool on the new node, it shows only three nodes; 172.17.3.1 is not
showing up.

# nodetool ring -h 172.17.0.173
Note: Ownership information does not include topology; for complete information, specify a
keyspace
Datacenter: Analytics
==========
Address       Rack        Status State   Load            Owns                Token
172.17.0.173  rack1       Up     Normal  15.65 GB        33.33%              28356863910078205288614550619314017621
Datacenter: Cassandra
==========
Address       Rack        Status State   Load            Owns                Token
                                                                             141784319550391026443072753096570088106
172.17.3.2    rack1       Up     Normal  19.42 GB        33.33%              85070591730234615865843651857942052864
172.17.3.3    rack1       Up     Normal  18.77 GB        33.33%              141784319550391026443072753096570088106


When checked through nodetool on the old node, it shows only three nodes; 172.17.0.173 is not
showing up.

# nodetool ring -h 172.17.3.1
Note: Ownership information does not include topology; for complete information, specify a
keyspace
Datacenter: Cassandra
==========
Address     Rack        Status State   Load            Owns                Token
                                                                           141784319550391026443072753096570088106
172.17.3.1  rack1       Up     Normal  15.69 GB        33.33%              28356863910078205288614550619314017620
172.17.3.2  rack1       Up     Normal  19.43 GB        33.33%              85070591730234615865843651857942052864
172.17.3.3  rack1       Up     Normal  18.77 GB        33.33%              141784319550391026443072753096570088106
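
For completeness, the gossip view on each side can be dumped as well; nodetool gossipinfo lists every
endpoint a node knows about together with its HOST_ID, which should make the collision (or the
missing endpoint) visible directly:

# nodetool -h 172.17.3.1 gossipinfo
# nodetool -h 172.17.0.173 gossipinfo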

Thanks,
Rahul Gupta
DEKA Research & Development<http://www.dekaresearch.com/>
340 Commercial St  Manchester, NH  03101
P: 603.666.3908 extn. 6504 | C: 603.718.9676

This e-mail and the information, including any attachments, it contains are intended to be
a confidential communication only to the person or entity to whom it is addressed and may
contain information that is privileged. If the reader of this message is not the intended
recipient, you are hereby notified that any dissemination, distribution or copying of this
communication is strictly prohibited. If you have received this communication in error, please
immediately notify the sender and destroy the original message.

