cassandra-commits mailing list archives

From "Brandon Williams (JIRA)" <j...@apache.org>
Subject [jira] [Commented] (CASSANDRA-3243) Node which was decommissioned and shut-down reappears on a single node
Date Fri, 23 Sep 2011 20:03:26 GMT

    [ https://issues.apache.org/jira/browse/CASSANDRA-3243?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13113713#comment-13113713 ]

Brandon Williams commented on CASSANDRA-3243:
---------------------------------------------

0919 is missing the LOCATION_KEY (the node's own token), which is odd, because Cassandra will
refuse to start up with this table: it should not exist without that key. The table does list
the node itself in the saved endpoints, but no other nodes.
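The invariant described above can be sketched roughly as follows. This is a hypothetical simplification, not Cassandra's actual startup code; the dict, the `check_location_info` helper, and the `"saved_endpoints"` key are all made up for illustration:

```python
# Hypothetical sketch of the startup invariant described above: a
# LocationInfo table that has data but no LOCATION_KEY should never
# exist, so startup refuses to proceed.
LOCATION_KEY = "L"  # stand-in for the key holding the node's own token


def check_location_info(location_info: dict) -> None:
    """Refuse to start if the table has data but lacks the node's own token."""
    if location_info and LOCATION_KEY not in location_info:
        raise RuntimeError("LocationInfo has data but no LOCATION_KEY; refusing to start")


# A table shaped like 0919's (saved endpoints present, own token absent)
# trips the check:
table_0919 = {"saved_endpoints": {"10.34.22.201": "token-abc"}}
try:
    check_location_info(table_0919)
    refused = False
except RuntimeError:
    refused = True
```

An empty table (a fresh node) or one that does contain LOCATION_KEY passes the same check without raising.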

0922 is complete: it contains LOCATION_KEY, Cassandra starts right up with it, and I can see
the removed token in the saved endpoints with an IP address of 10.34.22.201. The strange thing,
however, is that the timestamp on that column is approximately two days _older_ than the one
for the local node itself, which should be impossible. Is there any chance this node's clock
was way off or was changed?
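The clock-skew theory fits the timestamp ordering: under last-write-wins reconciliation, a column written later in real time but by a node with a slow clock carries an older timestamp. A minimal sketch, with made-up token values and timestamps (`reconcile` is an illustrative simplification, not Cassandra's actual code):

```python
# Simplified last-write-wins reconciliation: the column with the highest
# client-supplied timestamp wins, regardless of when it actually arrived.
def reconcile(a: dict, b: dict) -> dict:
    return a if a["timestamp"] >= b["timestamp"] else b


DAY_US = 86_400 * 1_000_000  # one day in microseconds

local_entry = {"value": "local-token", "timestamp": 1_316_800_000_000_000}

# Written *after* local_entry in real time, but by a node whose clock was
# roughly two days behind, so its timestamp is older and it loses.
removed_entry = {
    "value": "removed-token",
    "timestamp": 1_316_800_000_000_000 - 2 * DAY_US,
}

winner = reconcile(local_entry, removed_entry)
```

This is exactly the signature a skewed clock leaves behind: a write that is chronologically newer looks two days older than columns written before it.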

> Node which was decommissioned and shut-down reappears on a single node
> ----------------------------------------------------------------------
>
>                 Key: CASSANDRA-3243
>                 URL: https://issues.apache.org/jira/browse/CASSANDRA-3243
>             Project: Cassandra
>          Issue Type: Bug
>    Affects Versions: 0.8.5
>            Reporter: Jason Harvey
>            Assignee: Brandon Williams
>            Priority: Minor
>         Attachments: locationinfo_0919.tgz, locationinfo_0922.tgz
>
>
> I decommissioned a node several days ago. It was no longer in the ring list on any node
in the ring. However, it was in the dead gossip list.
> In an attempt to clean it out of the dead gossip list so I could truncate, I shut down
the entire ring and brought it back up. Once the ring came back up, one node showed the decommissioned
node as still in the ring in a state of 'Down'. No other node in the ring shows this info.
> I successfully ran removetoken on the node to get that phantom node out. However, it
is back in the dead gossip list, preventing me from truncating.
> Where might the info on this decommissioned node be stored? Is HH (hinted handoff) possibly
trying to deliver to the removed node, thus putting it back in the ring on one node?
> I find it extremely curious that none of the other nodes in the ring showed the phantom
node. Shouldn't gossip have propagated the node everywhere, even if it was down?

--
This message is automatically generated by JIRA.
For more information on JIRA, see: http://www.atlassian.com/software/jira

        
