cassandra-user mailing list archives

From Robert Coli <>
Subject Re: Upgrade Limitations Question
Date Wed, 23 Sep 2015 22:54:01 GMT
On Wed, Sep 16, 2015 at 7:02 AM, Vasileios Vlachos <> wrote:

> In the end we had to wait for upgradesstables to finish on every node,
> just to eliminate the possibility of this being the reason for any weird
> behaviour after the upgrade. However, this process might take a long time
> in a cluster with a large number of nodes, which means no new work can be
> done for that period.

Yes, this is the worst case scenario and it's pretty bad for large clusters
/ large data-size per node.

>> 1) TRUNCATE requires all known nodes to be available to succeed; if you are
>> restarting one, it won't be available.
>
> I suppose "all" means all nodes, not just all replicas here, is that right?
> Not directly related to the original question, but that might explain why we
> sometimes end up with peculiar behaviour when we run TRUNCATE. We've now
> taken the approach of DROPping the CF and recreating it when possible (even
> though this is still problematic when reusing the same CF name).

Pretty sure that TRUNCATE and DROP have the same behavior wrt node
availability. Yes, I mean all nodes which are supposed to replicate that data.

>> Is there a way to find out if upgradesstables has been run against a
>> particular node or not?

If you run it and it immediately completes [1], it has probably been run.

[1] - 1.2.4 - "NOOP on upgradesstables for already upgraded node"
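Another rough way to check, besides re-running the command, is to look at the on-disk format version embedded in each SSTable filename (pre-3.0 names look like `<keyspace>-<table>-<version>-<generation>-<component>.db`, e.g. a 1.2-era file might use format "ic"). A minimal sketch, assuming that naming scheme and using hypothetical filenames; the keyspace, table, and version strings here are illustrative only:

```python
import re

# Pre-3.0 SSTable data files are named like:
#   <keyspace>-<table>-<version>-<generation>-Data.db
# where <version> is a short lowercase code identifying the on-disk format.
SSTABLE_RE = re.compile(
    r"^(?P<ks>[^-]+)-(?P<cf>[^-]+)-(?P<version>[a-z]{2})-(?P<gen>\d+)-Data\.db$"
)

def sstable_versions(filenames):
    """Return the sorted list of on-disk format versions among Data.db files."""
    versions = set()
    for name in filenames:
        m = SSTABLE_RE.match(name)
        if m:
            versions.add(m.group("version"))
    return sorted(versions)

# Hypothetical listing mixing an old-format ("ic") and an upgraded ("jb") sstable:
files = [
    "mykeyspace-users-ic-5-Data.db",
    "mykeyspace-users-jb-6-Data.db",
    "mykeyspace-users-jb-6-Index.db",  # non-Data components are ignored
]
print(sstable_versions(files))  # -> ['ic', 'jb']  (old format still present)
```

If more than one version shows up in a node's data directories, that node still has sstables that upgradesstables would rewrite.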
