cassandra-commits mailing list archives

From "Pavel Yaskevich (Commented) (JIRA)" <>
Subject [jira] [Commented] (CASSANDRA-3804) upgrade problems from 1.0 to trunk
Date Mon, 30 Jan 2012 15:32:10 GMT


Pavel Yaskevich commented on CASSANDRA-3804:

This exception (taken from Sylvain's #2) explains what will happen when you only partially upgrade the cluster:

ERROR [GossipStage:1] 2012-01-30 14:35:13,363 (line 139) Fatal
exception in thread Thread[GossipStage:1,5,main]
java.lang.UnsupportedOperationException: Not a time-based UUID
        at java.util.UUID.timestamp(
        at org.apache.cassandra.service.MigrationManager.updateHighestKnown(
        at org.apache.cassandra.service.MigrationManager.rectify(
        at org.apache.cassandra.service.MigrationManager.onAlive(
        at org.apache.cassandra.gms.Gossiper.markAlive(
        at org.apache.cassandra.gms.Gossiper.handleMajorStateChange(
        at org.apache.cassandra.gms.Gossiper.applyStateLocally(
        at org.apache.cassandra.gms.GossipDigestAckVerbHandler.doVerb(
        at java.util.concurrent.ThreadPoolExecutor$Worker.runTask(
        at java.util.concurrent.ThreadPoolExecutor$

Because we switched away from time-based UUIDs for schema versions, MigrationManager on the old
nodes will fail every time a node with the new schema starts up, or whenever an old node requests
migrations from it (because it sees that its schema version differs from the others). Even if we
fix the MigrationManager.rectify(...) method in 1.0.x, nodes with the new and old schema will
never reach agreement, because the UUID types differ and the old nodes can no longer run
schema mutations.
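
The failure mode can be seen with a minimal standalone Java snippet (illustration only, not Cassandra code): java.util.UUID.timestamp() is defined only for version-1 (time-based) UUIDs and throws UnsupportedOperationException for any other version, which is exactly what MigrationManager.updateHighestKnown hits when handed one of the new non-time-based schema-version UUIDs:

```java
import java.util.UUID;

public class UuidTimestampDemo {
    public static void main(String[] args) {
        // randomUUID() returns a version 4 (random) UUID with no embedded timestamp.
        UUID v4 = UUID.randomUUID();
        System.out.println("version = " + v4.version()); // prints: version = 4

        try {
            // timestamp() is only valid for version 1 (time-based) UUIDs;
            // calling it on any other version throws -- the same exception
            // an old node raises when gossiped a new-style schema version.
            v4.timestamp();
        } catch (UnsupportedOperationException e) {
            System.out.println("caught: " + e.getMessage());
        }
    }
}
```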
> upgrade problems from 1.0 to trunk
> ----------------------------------
>                 Key: CASSANDRA-3804
>                 URL:
>             Project: Cassandra
>          Issue Type: Bug
>          Components: Core
>    Affects Versions: 1.1
>         Environment: ubuntu, cluster set up with ccm.
>            Reporter: Tyler Patterson
>            Assignee: Pavel Yaskevich
>             Fix For: 1.1
> A 3-node cluster is on version 0.8.9, 1.0.6, or 1.0.7, and then one and only one node
> is taken down, upgraded to trunk, and started again. An rpc timeout exception occurs when
> counter-add operations are performed. It usually takes between 1 and 500 add operations
> before the failure occurs. The failure seems to happen sooner if the coordinator node is
> NOT the one that was upgraded. Here is the error:
> {code}
> ======================================================================
> ERROR: counter_upgrade_test.TestCounterUpgrade.counter_upgrade_test
> ----------------------------------------------------------------------
> Traceback (most recent call last):
>   File "/usr/lib/pymodules/python2.7/nose/", line 187, in runTest
>     self.test(*self.arg)
>   File "/home/tahooie/cassandra-dtest/", line 50, in counter_upgrade_test
>     cursor.execute("UPDATE counters SET row = row+1 where key='a'")
>   File "/usr/local/lib/python2.7/dist-packages/cql/", line 96, in execute
>     raise cql.OperationalError("Request did not complete within rpc_timeout.")
> OperationalError: Request did not complete within rpc_timeout.
> {code}
> A script has been added to cassandra-dtest ( to demonstrate the
> failure. The newest version of CCM is required to run the test. It is available here if it
> hasn't yet been pulled:

This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators:!default.jspa
For more information on JIRA, see:

