cassandra-user mailing list archives

From Aiman Parvaiz <>
Subject Re: Need help with incremental repair
Date Sun, 29 Oct 2017 22:12:49 GMT
Thanks Blake and Paulo for the response.

Yes, the idea is to go back to non-incremental repairs. I am waiting for all the "anticompaction
after repair" activities to complete, and in my understanding (thanks to Blake for the
explanation), I can then run a full repair on that KS and get back to my non-incremental repair regimen.

I assume that I should mark the SSTables as unrepaired first and then run a full repair?

Also, although I installed Cassandra from the dsc22 package on CentOS 7, I couldn't find
the sstable tools installed; I need to figure that out too.
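For reference, this is the full-repair invocation I have in mind once the SSTables are back to unrepaired (the keyspace name is a placeholder, and the command is only echoed here as a sketch rather than executed):

```shell
# Placeholder keyspace name -- substitute your own.
KEYSPACE="my_keyspace"

# On 2.2, a plain `nodetool repair` defaults to incremental,
# so the -full flag is needed to get the old-style full repair.
# Echoed rather than run, as a sketch:
echo "nodetool repair -full ${KEYSPACE}"
```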

From: Paulo Motta <>
Sent: Sunday, October 29, 2017 1:56:38 PM
Subject: Re: Need help with incremental repair

> Assuming the situation is just "we accidentally ran incremental repair", you shouldn't
have to do anything. It's not going to hurt anything

Once you run incremental repair, your data is permanently marked as
repaired, and is no longer compacted with new non-incrementally
repaired data. This can cause read fragmentation and prevent deleted
data from being purged. If you ever run incremental repair and want to
switch to non-incremental repair, you should manually mark your
repaired SSTables as not-repaired with the sstablerepairedset tool.
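A minimal sketch of that procedure, assuming a default dsc22 package layout on CentOS 7 (the keyspace name and data path are assumptions; sstablerepairedset ships with the Cassandra tools). The commands are echoed rather than executed:

```shell
# Sketch only: keyspace name and data path are assumptions for a
# default package install.
KEYSPACE="my_keyspace"
DATA_DIR="/var/lib/cassandra/data/${KEYSPACE}"

# 1. Stop the node first -- the tool rewrites SSTable metadata and
#    must not race live compactions.
echo "nodetool drain && sudo service cassandra stop"

# 2. Mark every SSTable in the keyspace as unrepaired.
echo "find ${DATA_DIR} -name '*-Data.db' | xargs sstablerepairedset --really-set --is-unrepaired"

# 3. Bring the node back up, then follow with a full repair.
echo "sudo service cassandra start"
```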

2017-10-29 3:05 GMT+11:00 Blake Eggleston <>:
> Hey Aiman,
> Assuming the situation is just "we accidentally ran incremental repair", you
> shouldn't have to do anything. It's not going to hurt anything. Pre-4.0
> incremental repair has some issues that can cause a lot of extra streaming,
> and inconsistencies in some edge cases, but as long as you're running full
> repairs before gc grace expires, everything should be ok.
> Thanks,
> Blake
> On October 28, 2017 at 1:28:42 AM, Aiman Parvaiz wrote:
> Hi everyone,
> We seek your help with an issue we are facing on our 2.2.8 cluster.
> We have a 24-node cluster spread over 3 DCs.
> Initially, when the cluster was in a single DC we were using The Last Pickle
> reaper 0.5 to repair it with incremental repair set to false. We added 2
> more DCs. The problem is that on one of the newer DCs we accidentally
> ran nodetool repair <keyspace> without realizing that in 2.2 the default
> repair mode is incremental.
> I am not seeing any errors in the logs so far, but I wanted to know the
> best way to handle this situation. To make things a little more
> complicated, the node on which we triggered this repair is almost out of
> disk space, and we had to restart C* on it.
> I can see a bunch of "anticompaction after repair" under Opscenter Activites
> across various nodes in the 3 DCs.
> Any help, suggestion would be appreciated.
> Thanks
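For anyone wanting to verify whether those anticompactions actually flipped the repaired flag on disk, sstablemetadata prints a "Repaired at" line per SSTable. A sketch, with the SSTable path being an assumption for a default install (the command is echoed rather than executed):

```shell
# Sketch: check an SSTable's repair status. The path below is an
# assumption; sstablemetadata ships with the Cassandra tools.
SSTABLE="/var/lib/cassandra/data/my_keyspace/my_table/lb-1-big-Data.db"

# "Repaired at: 0" means unrepaired; a nonzero timestamp means the
# SSTable was marked repaired by an incremental repair.
echo "sstablemetadata ${SSTABLE} | grep 'Repaired at'"
```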
