incubator-cassandra-user mailing list archives

From Ben Chobot <be...@instructure.com>
Subject Re: removing old nodes
Date Thu, 21 Mar 2013 22:21:49 GMT
Ah, well I'll check back in a week then. But for the record, what I meant was that nodetool
gossipinfo now has entries like:

/10.1.20.201
  STATUS:LEFT,50,1364152145790

Where it shows "50" is where the token used to be, and where it still is on all my live nodes.
So it appears to me as if all my assassinated nodes now have a token of 50. Either way, they
don't seem to be bugging the rest of the cluster anymore, so thanks again.
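In case it's useful to anyone scripting this check, here's a sketch that pulls the endpoints gossip still remembers with a LEFT status out of `nodetool gossipinfo` output. It assumes the 1.1-era layout shown above (a `/<ip>` header line followed by an indented `STATUS:LEFT,<token>,<number>` line); the field handling is a best guess against that format, and the trailing number looks like an epoch-millis timestamp:

```shell
# List endpoints whose gossip state is LEFT, along with the token and the
# trailing number from the STATUS line (which appears to be an epoch-millis
# timestamp). Assumes the nodetool gossipinfo layout seen on 1.1.x:
#   /10.1.20.201
#     STATUS:LEFT,50,1364152145790
nodetool gossipinfo | awk '
  /^\// { ip = substr($1, 2) }            # "/10.1.20.201" -> "10.1.20.201"
  /STATUS:LEFT/ {
      split($1, f, ",")                   # "STATUS:LEFT,50,1364152145790"
      print ip, "token=" f[2], "expires_ms=" f[3]
  }'
```

Against the sample above this prints `10.1.20.201 token=50 expires_ms=1364152145790`.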

On Mar 21, 2013, at 3:05 PM, Alain RODRIGUEZ wrote:

> "(And now all sharing token 50? I dunno where that came from.)"
> 
> Not sure about what you mean.
> 
> "nodetool gossipinfo still shows all the old nodes there"
> 
> They should appear with a "left" or "removed" status. Off the top of my head, this information
will remain for 7 days, but I'm not sure about that.
> 
> 
> 
> 
> 2013/3/21 Ben Chobot <bench@instructure.com>
> Thanks Alain, this seems to have stopped the log messages, even though nodetool gossipinfo
still shows all the old nodes there. (And now all sharing token 50? I dunno where that came
from.) Will they eventually fall away from the cluster, or are they there for good?
> 
> On Mar 21, 2013, at 11:53 AM, Alain RODRIGUEZ wrote:
> 
>> Using the unsafeAssassinateEndpoint function with old IPs from JMX should do the
trick.
>> 
>> This was already discussed on this mailing list; search using "unsafeAssassinateEndpoint"
as a keyword to find everything you need to know about it.
>> 
>> Hope you'll be ok after that.
>> 
>> Alain
>> 
>> 
>> 2013/3/21 Ben Chobot <bench@instructure.com>
>> I've got a 1.1.5 cluster, and a few weeks ago I removed some nodes from it. (I was
trying to upgrade nodes from AWS's large to xlarge, and for reasons that made sense at
the time, it seemed better to double my nodes and then decommission the smaller ones rather
than simply rebuild the existing nodes serially.)
>> 
>> Now the remaining nodes are all frequently logging that the old, decommissioned nodes
are dead and that their old token is being removed.... which is great, I guess, but why does
my cluster know about them at all? Doing a nodetool removetoken doesn't work, as the dead
nodes don't display in the ring. Is this expected behavior after a nodetool decommission?
Is maybe something cached that I can safely uncache?
>> 
> 
> 

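For the archives: one way to call unsafeAssassinateEndpoint without writing any JMX client code is the jmxterm CLI. This is only a sketch, and the details are assumptions: that jmxterm.jar sits in the current directory, that the node listens on the default JMX port 7199, and that 10.1.20.201 is the dead node's address.

```shell
# Invoke Gossiper.unsafeAssassinateEndpoint over JMX using jmxterm in
# non-interactive mode. Assumptions: jmxterm.jar is in the current
# directory, JMX is on the default port 7199, and 10.1.20.201 is the
# address of the already-removed node still lingering in gossip.
java -jar jmxterm.jar -n <<'EOF'
open localhost:7199
bean org.apache.cassandra.net:type=Gossiper
run unsafeAssassinateEndpoint 10.1.20.201
close
EOF
```

As the name says, this is an unsafe operation: only run it against an endpoint that has already been decommissioned or removed from the ring.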
