We are running 2.1.18 with vnodes in production, and because of CASSANDRA-11155 (https://issues.apache.org/jira/browse/CASSANDRA-11155) we can't run cleanup, e.g. after extending the cluster, without blocking our hourly snapshots.
What options do we have to get rid of partitions a node does not own anymore?
· Upgrading to a version that has this issue fixed; however, moving to 2.2+ is not an option at the moment due to various other issues
· Temporarily disabling the hourly cron job before starting cleanup and re-enabling it after cleanup has finished
· Any other way to rewrite SSTables after a cluster scale-out so they contain only the data the node still owns
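For what it's worth, the second option can be scripted so snapshots and cleanup never overlap. A minimal sketch, assuming the hourly snapshot is installed as a cron file (the path below is hypothetical; adjust to your setup), and with the nodetool command parameterised:

```shell
# Sketch: pause the snapshot cron job, run cleanup, then re-enable the job.
# "cleanup" rewrites SSTables, dropping partitions this node no longer owns.
run_cleanup_without_snapshots() {
    cron_job="$1"        # path to the snapshot cron file (assumption: a file-based cron job)
    nodetool_cmd="$2"    # normally "nodetool"; parameterised here for testing

    mv "$cron_job" "$cron_job.disabled"   # disable hourly snapshots
    "$nodetool_cmd" cleanup               # rewrite SSTables without the snapshot job interfering
    mv "$cron_job.disabled" "$cron_job"   # re-enable hourly snapshots
}

# Typical invocation (cron path is an assumption):
# run_cleanup_without_snapshots /etc/cron.hourly/cassandra-snapshot nodetool
```

Note that if cleanup fails mid-run, the cron file stays disabled, so you may want a `trap` to restore it on exit.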