cassandra-commits mailing list archives

From Julius Žaromskis (JIRA) <>
Subject [jira] [Commented] (CASSANDRA-13441) Schema version changes for each upgraded node in a rolling upgrade, causing migration storms
Date Thu, 11 May 2017 15:13:04 GMT


Julius Žaromskis commented on CASSANDRA-13441:

Hi, is there any workaround for this issue? I've hit it after upgrading from 3.0.9 to 3.0.13 and
running sstableupgrade. I noticed weird disk write patterns and started seeing migration tasks
bouncing around. I've only managed to upgrade the first of the 3 nodes. The migration tasks
stopped after I rebooted the first node.

Cluster Information:
        Name: cluster
        Snitch: org.apache.cassandra.locator.DynamicEndpointSnitch
        Partitioner: org.apache.cassandra.dht.Murmur3Partitioner
        Schema versions:
                600b7268-d42a-3b72-8706-093b6c8cfaff: []
                77a40699-8e9e-35aa-834e-68c32e40a45a: [,]

Datacenter: dc1
|/ State=Normal/Leaving/Joining/Moving
--  Address     Load       Tokens       Owns (effective)  Host ID                        
UN  284.95 GB  256          63.4%             d0d83d9d-0dec-45cd-9ca9-93515fa131f3
UN  288.53 GB  256          64.1%             6d9709a0-0e10-46a1-9afa-d106b74ca9e0
UN  326.31 GB  256          72.5%             5c969700-8bd9-49a4-9772-1284439f8364

The schema version of the first node would not propagate to the other nodes. I'm afraid further
upgrades might create new schema versions. I can't afford to lose any data. Any advice?

> Schema version changes for each upgraded node in a rolling upgrade, causing migration storms
> --------------------------------------------------------------------------------------------
>                 Key: CASSANDRA-13441
>                 URL:
>             Project: Cassandra
>          Issue Type: Bug
>          Components: Schema
>            Reporter: Jeff Jirsa
>            Assignee: Jeff Jirsa
>             Fix For: 3.0.14, 3.11.0, 4.0
> In versions < 3.0, during a rolling upgrade (say 2.0 -> 2.1), the first node to
> upgrade to 2.1 would add the new tables, setting the new 2.1 version ID, and subsequently
> upgraded hosts would settle on that version.
> When a 3.0 node upgrades and writes its own new-in-3.0 system tables, it'll write the
> same tables that exist in the schema with brand new timestamps. As written, this will cause
> all nodes in the cluster to change schema (to the version with the newest timestamp). On a
> sufficiently large cluster with a non-trivial schema, this could cause (literally) millions
> of migration tasks to needlessly bounce across the cluster.
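
To illustrate the mechanism the issue describes: the cluster-wide schema version is a digest over the contents of the schema tables, so rewriting the same tables with fresh timestamps yields a different version even though the logical schema is unchanged. The toy sketch below is not Cassandra's actual hashing code; the row model and `schema_version` helper are simplified assumptions purely for illustration.

```python
import hashlib
import uuid

def schema_version(rows):
    """Hypothetical stand-in for a schema digest: hash each schema row's
    name, definition, and write timestamp into a version UUID."""
    digest = hashlib.md5()
    for name, definition, timestamp in sorted(rows):
        digest.update(f"{name}|{definition}|{timestamp}".encode())
    return uuid.UUID(bytes=digest.digest())

# The same logical schema, written at two different times:
rows_t1 = [("ks.table1", "CREATE TABLE ...", 1000)]
rows_t2 = [("ks.table1", "CREATE TABLE ...", 2000)]

v1 = schema_version(rows_t1)
v2 = schema_version(rows_t2)
assert v1 != v2  # new timestamps alone produce a "new" schema version
```

Because every upgraded node rewrites the system tables with its own timestamps, each one ends up advertising a distinct version, and the rest of the cluster chases it with migration tasks.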

This message was sent by Atlassian JIRA

