incubator-cassandra-user mailing list archives

From Jonathan Ellis <jbel...@gmail.com>
Subject Re: Schema versions reflect schemas on unwanted nodes
Date Thu, 13 Oct 2011 06:14:59 GMT
Does nodetool removetoken not work?
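For reference, a hedged sketch of the suggested command. The node IP and token below are placeholders, not values from this thread; on a real cluster you would take the unwanted node's token from "nodetool ring" and run the eviction against any live node.

```shell
# Sketch only: evicting an unwanted node's token from the ring.
# NODE_IP and TOKEN are hypothetical placeholders.
NODE_IP="10.0.0.1"                                  # any live node in the first cluster
TOKEN="85070591730234615865843651857942052864"      # token of the unwanted node

# Guarded so this sketch is a no-op where nodetool is not installed.
if command -v nodetool >/dev/null 2>&1; then
  nodetool -h "$NODE_IP" removetoken "$TOKEN"
  nodetool -h "$NODE_IP" removetoken status   # check progress if it seems stuck
else
  echo "nodetool not on PATH; run this against a live cluster node"
fi
```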

On Thu, Oct 13, 2011 at 12:59 AM, Eric Czech <eric@nextbigsound.com> wrote:
> Not sure if anyone has seen this before, but it's really killing me right
> now.  Perhaps that was too long a description of the issue, so here's a
> more succinct question -- how do I remove nodes associated with a cluster
> that contain no data and have no reason to be associated with the cluster
> whatsoever?
> My last resort here is to stop cassandra (after recording the token for
> each node), set the initial token for each node in the cluster in
> cassandra.yaml, manually delete the LocationInfo* sstables in the system
> keyspace, and then restart.  I'm hoping there's a simpler, less risky way
> to do this, so please, please let me know if that's true!
> Thanks again.
> - Eric
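
The last-resort procedure above (record each node's token, pin initial_token in cassandra.yaml, delete the LocationInfo* sstables, restart) can be sketched as shell. To keep the sketch safe to run, the data directory below is a scratch simulation; on a real node it would be the data_file_directories path from cassandra.yaml, and Cassandra would have to be stopped first.

```shell
# Simulate the system keyspace layout in a scratch directory
# (hypothetical sstable names, for illustration only).
DATA_DIR="$(mktemp -d)"
mkdir -p "$DATA_DIR/system"
touch "$DATA_DIR/system/LocationInfo-g-1-Data.db" \
      "$DATA_DIR/system/LocationInfo-g-1-Index.db" \
      "$DATA_DIR/system/Schema-g-1-Data.db"

# The actual step: remove only the LocationInfo* sstables, leaving
# the rest of the system keyspace (e.g. schema tables) intact.
rm -f "$DATA_DIR"/system/LocationInfo*

ls "$DATA_DIR/system"
```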
> On Tue, Oct 11, 2011 at 11:55 AM, Eric Czech <eric@nextbigsound.com> wrote:
>>
>> Hi, I'm having what I think is a fairly uncommon schema issue --
>> My situation is that I had a cluster with 10 nodes and a consistent
>> schema.  Then, in an experiment to set up a second cluster with the same
>> information (by copying the raw sstables), I left the LocationInfo* sstables
>> in the system keyspace of the new cluster, and after starting the second
>> cluster, I realized that the two clusters were discovering each other when
>> they shouldn't have been.  Since then, I have changed the cluster name for
>> the second cluster and made sure to delete the LocationInfo* sstables before
>> starting it, and the two clusters are now operating independently of one
>> another for the most part.  The only remaining connection between the two
>> seems to be that the first cluster still maintains references to nodes
>> in the second cluster in its schema versions, despite those nodes not
>> actually being part of the ring.
>> Here's what my "describe cluster" looks like on the original cluster:
>> Cluster Information:
>>    Snitch: org.apache.cassandra.locator.SimpleSnitch
>>    Partitioner: org.apache.cassandra.dht.RandomPartitioner
>>    Schema versions:
>> 48971cb0-e9ff-11e0-0000-eb9eab7d90bf: [<INTENTIONAL_IP1>,
>> <INTENTIONAL_IP2>, ..., <INTENTIONAL_IP10>]
>> 848bcfc0-eddf-11e0-0000-8a3bb58f08ff: [<NOT_INTENTIONAL_IP1>,
>> <NOT_INTENTIONAL_IP2>]
>> The second cluster, however, contains no schema versions involving nodes
>> from the first cluster.
>> My question then is, how can I remove those schema versions from the
>> original cluster that are associated with the unwanted nodes from the second
>> cluster?  Is there any way to remove or evict an IP from a cluster instead
>> of just a token?
>> Thanks in advance!
>> - Eric
>



-- 
Jonathan Ellis
Project Chair, Apache Cassandra
co-founder of DataStax, the source for professional Cassandra support
http://www.datastax.com
