incubator-cassandra-user mailing list archives

From Eric Czech <e...@nextbigsound.com>
Subject Re: Schema versions reflect schemas on unwanted nodes
Date Fri, 14 Oct 2011 04:33:39 GMT
Thanks Brandon!  Out of curiosity, would making schema changes through a
thrift interface (via hector) be any different?  In other words, would using
hector instead of the cli make schema changes possible without upgrading?
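
For reference, here's roughly what I mean, sketched with Hector (this
assumes the 0.8-era Hector API; the cluster name, host, and
keyspace/column family names are placeholders):

    import me.prettyprint.hector.api.Cluster;
    import me.prettyprint.hector.api.ddl.ColumnFamilyDefinition;
    import me.prettyprint.hector.api.factory.HFactory;

    public class SchemaChangeSketch {
        public static void main(String[] args) {
            // Connect through one node of the cluster (placeholder address).
            Cluster cluster = HFactory.getOrCreateCluster("MyCluster", "cass-1:9160");

            // Define and submit a new column family. Hector sends this over
            // the same thrift schema-migration path the cli uses, so it is
            // presumably subject to the same schema agreement checks.
            ColumnFamilyDefinition cfDef =
                    HFactory.createColumnFamilyDefinition("MyKeyspace", "MyNewColumnFamily");
            cluster.addColumnFamily(cfDef);
        }
    }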

On Thu, Oct 13, 2011 at 8:22 AM, Brandon Williams <driftx@gmail.com> wrote:

> You're running into https://issues.apache.org/jira/browse/CASSANDRA-3259
>
> Try upgrading and doing a rolling restart.
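> For example, one node at a time (a sketch; assumes a service-style
> install and default ports):
>
>   nodetool -h <node> drain   # flush memtables, stop accepting writes
>   <stop cassandra, upgrade the binaries, start cassandra>
>   nodetool -h <node> ring    # confirm it's back before the next node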
>
> -Brandon
>
> On Thu, Oct 13, 2011 at 9:11 AM, Eric Czech <eric@nextbigsound.com> wrote:
> > Nope, there was definitely no intersection of the seed nodes between the
> > two clusters so I'm fairly certain that the second cluster found out
> > about the first through what was in the LocationInfo* system tables.
> > Also, I don't think that procedure will really help because I don't
> > actually want the schema on cass-analysis-1 to be consistent with the
> > schema in the original cluster -- I just want to totally remove it.
> >
> > On Thu, Oct 13, 2011 at 8:01 AM, Mohit Anchlia <mohitanchlia@gmail.com>
> > wrote:
> >>
> >> Do you have same seed node specified in cass-analysis-1 as cass-1,2,3?
> >> I am thinking that changing the seed node in cass-analysis-2 and
> >> following the directions in
> >> http://wiki.apache.org/cassandra/FAQ#schema_disagreement might solve
> >> the problem. Someone please correct me.
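> >> (If I remember that FAQ entry right, the procedure is roughly: find
> >> the minority schema version with "describe cluster" in cassandra-cli,
> >> then on each node stuck on it:
> >>
> >>   <stop cassandra>
> >>   rm <data_dir>/system/Schema*
> >>   rm <data_dir>/system/Migrations*
> >>   <start cassandra>
> >>
> >> so it pulls the agreed schema when it rejoins.)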
> >>
> >> On Thu, Oct 13, 2011 at 12:05 AM, Eric Czech <eric@nextbigsound.com>
> >> wrote:
> >> > I don't think that's what I'm after here since the unwanted nodes
> >> > were originally assimilated into the cluster with the same
> >> > initial_token values as other nodes that were already in the cluster
> >> > (that have, and still do have, useful data).  I know this is an
> >> > awkward situation so I'll try to depict it in a simpler way:
> >> > Let's say I have a simplified version of our production cluster that
> >> > looks like this -
> >> > cass-1   token = A
> >> > cass-2   token = B
> >> > cass-3   token = C
> >> > Then I tried to create a second cluster that looks like this -
> >> > cass-analysis-1   token = A  (and contains same data as cass-1)
> >> > cass-analysis-2   token = B  (and contains same data as cass-2)
> >> > cass-analysis-3   token = C  (and contains same data as cass-3)
> >> > But after starting the second cluster, things got crossed up between
> >> > the clusters and here's what the original cluster now looks like -
> >> > cass-1   token = A   (has data and schema)
> >> > cass-2   token = B   (has data and schema)
> >> > cass-3   token = C   (has data and schema)
> >> > cass-analysis-1   token = A  (has *no* data and is not part of the
> >> > ring, but is trying to be included in cluster schema)
> >> > A simplified version of "describe cluster" for the original cluster
> >> > now shows:
> >> > Cluster Information:
> >> >    Schema versions:
> >> > SCHEMA-UUID-1: [cass-1, cass-2, cass-3]
> >> > SCHEMA-UUID-2: [cass-analysis-1]
> >> > But the simplified ring looks like this (has only 3 nodes instead of
> >> > 4):
> >> > Host       Owns     Token
> >> > cass-1     33%       A
> >> > cass-2     33%       B
> >> > cass-3     33%       C
> >> > The original cluster is still working correctly but all live schema
> >> > updates are failing because of the inconsistent schema versions
> >> > introduced by the unwanted node.
> >> > From my perspective, a simple fix seems to be for cassandra to
> >> > exclude nodes that aren't part of the ring from the schema
> >> > consistency requirements.  Any reason that wouldn't work?
> >> > And aside from a possible code patch, any recommendations as to how I
> >> > can best fix this given the current 8.4 release?
> >> >
> >> > On Thu, Oct 13, 2011 at 12:14 AM, Jonathan Ellis <jbellis@gmail.com>
> >> > wrote:
> >> >>
> >> >> Does nodetool removetoken not work?
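> >> >> i.e., something like:
> >> >>
> >> >>   nodetool -h <live node> removetoken <token of the unwanted node>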
> >> >>
> >> >> On Thu, Oct 13, 2011 at 12:59 AM, Eric Czech <eric@nextbigsound.com>
> >> >> wrote:
> >> >> > Not sure if anyone has seen this before but it's really killing me
> >> >> > right now.  Perhaps that was too long of a description of the issue
> >> >> > so here's a more succinct question -- How do I remove nodes
> >> >> > associated with a cluster that contain no data and have no reason
> >> >> > to be associated with the cluster whatsoever?
> >> >> > My last resort here is to stop cassandra (after recording all
> >> >> > tokens for each node), set the initial token for each node in the
> >> >> > cluster in cassandra.yaml, manually delete the LocationInfo*
> >> >> > sstables in the system keyspace, and then restart.  I'm hoping
> >> >> > there's a simpler, less seemingly risky way to do this so please,
> >> >> > please let me know if that's true!
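> >> >> > (Spelled out per node, that last resort would be roughly -- paths
> >> >> > are illustrative:
> >> >> >
> >> >> >   <record the node's token from nodetool ring>
> >> >> >   <stop cassandra>
> >> >> >   <set initial_token to the recorded token in cassandra.yaml>
> >> >> >   rm <data_dir>/system/LocationInfo*
> >> >> >   <start cassandra>
> >> >> > )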
> >> >> > Thanks again.
> >> >> > - Eric
> >> >> > On Tue, Oct 11, 2011 at 11:55 AM, Eric Czech <eric@nextbigsound.com>
> >> >> > wrote:
> >> >> >>
> >> >> >> Hi, I'm having what I think is a fairly uncommon schema issue --
> >> >> >> My situation is that I had a cluster with 10 nodes and a
> >> >> >> consistent schema.  Then, in an experiment to set up a second
> >> >> >> cluster with the same information (by copying the raw sstables), I
> >> >> >> left the LocationInfo* sstables in the system keyspace in the new
> >> >> >> cluster and, after starting the second cluster, I realized that
> >> >> >> the two clusters were discovering each other when they shouldn't
> >> >> >> have been.  Since then, I changed the cluster name for the second
> >> >> >> cluster and made sure to delete the LocationInfo* sstables before
> >> >> >> starting it, and the two clusters are now operating independently
> >> >> >> of one another for the most part.  The only remaining connection
> >> >> >> between the two seems to be that the first cluster is still
> >> >> >> maintaining references to nodes in the second cluster in the
> >> >> >> schema versions despite those nodes not actually being part of
> >> >> >> the ring.
> >> >> >> Here's what my "describe cluster" looks like on the original
> >> >> >> cluster:
> >> >> >> Cluster Information:
> >> >> >>    Snitch: org.apache.cassandra.locator.SimpleSnitch
> >> >> >>    Partitioner: org.apache.cassandra.dht.RandomPartitioner
> >> >> >>    Schema versions:
> >> >> >> 48971cb0-e9ff-11e0-0000-eb9eab7d90bf: [<INTENTIONAL_IP1>,
> >> >> >> <INTENTIONAL_IP2>, ..., <INTENTIONAL_IP10>]
> >> >> >> 848bcfc0-eddf-11e0-0000-8a3bb58f08ff: [<NOT_INTENTIONAL_IP1>,
> >> >> >> <NOT_INTENTIONAL_IP2>]
> >> >> >> The second cluster, however, contains no schema versions involving
> >> >> >> nodes from the first cluster.
> >> >> >> My question then is, how can I remove those schema versions from
> >> >> >> the original cluster that are associated with the unwanted nodes
> >> >> >> from the second cluster?  Is there any way to remove or evict an
> >> >> >> IP from a cluster instead of just a token?
> >> >> >> Thanks in advance!
> >> >> >> - Eric
> >> >> >
> >> >>
> >> >>
> >> >>
> >> >> --
> >> >> Jonathan Ellis
> >> >> Project Chair, Apache Cassandra
> >> >> co-founder of DataStax, the source for professional Cassandra support
> >> >> http://www.datastax.com
> >> >
> >> >
> >
> >
>
