cassandra-user mailing list archives

From Eric Czech <e...@nextbigsound.com>
Subject Schema versions reflect schemas on unwanted nodes
Date Tue, 11 Oct 2011 17:55:37 GMT
Hi, I'm having what I think is a fairly uncommon schema issue --

My situation is that I had a cluster with 10 nodes and a consistent schema.
Then, in an experiment to set up a second cluster with the same information
(by copying the raw sstables), I left the LocationInfo* sstables from the
system keyspace in place on the new cluster, and after starting the second
cluster I realized that the two clusters were discovering each other when
they shouldn't have been.  Since then, I have changed the cluster name for
the second cluster and made sure to delete the LocationInfo* sstables before
starting it, and the two clusters now operate independently of one another
for the most part.  The only remaining connection between the two seems to
be that the first cluster still maintains references to nodes in the second
cluster in its schema versions, even though those nodes are not actually
part of the ring.
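For reference, the cleanup I described above amounted to something like the
following (the data directory path is an assumption; check
data_file_directories in your cassandra.yaml, and stop the node before
touching its sstables):

```shell
# Assumed default data directory; adjust to data_file_directories in cassandra.yaml.
DATA_DIR="${DATA_DIR:-/var/lib/cassandra/data}"

# With the node stopped, remove the persisted ring/gossip state sstables so the
# cloned node starts without knowledge of the original cluster's endpoints.
rm -f "$DATA_DIR"/system/LocationInfo*
```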

Here's what my "describe cluster" looks like on the original cluster:

Cluster Information:
   Snitch: org.apache.cassandra.locator.SimpleSnitch
   Partitioner: org.apache.cassandra.dht.RandomPartitioner
   Schema versions:
   48971cb0-e9ff-11e0-0000-eb9eab7d90bf: [<INTENTIONAL_IP1>, <INTENTIONAL_IP2>, ..., <INTENTIONAL_IP10>]
   848bcfc0-eddf-11e0-0000-8a3bb58f08ff: [<NOT_INTENTIONAL_IP1>, <NOT_INTENTIONAL_IP2>]

The second cluster, however, contains no schema versions involving nodes
from the first cluster.

My question, then, is: how can I remove the schema versions on the original
cluster that are associated with the unwanted nodes from the second cluster?
Is there any way to remove or evict a node by its IP address rather than
just by its token?
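For what it's worth, one thing I was considering trying is restarting the
affected nodes without their saved ring state, on the theory that gossip
would then re-learn only the live endpoints.  I'm not sure the property
below actually clears stale entries like these, so treat this as a guess,
e.g. in cassandra-env.sh:

```shell
# Hypothetical workaround (unverified): skip loading persisted ring state on
# restart so stale endpoints must be re-learned from live gossip.
JVM_OPTS="$JVM_OPTS -Dcassandra.load_ring_state=false"
```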

Thanks in advance!

- Eric
