incubator-cassandra-user mailing list archives

From: Andrey Ilinykh <>
Subject: Re: Why data tripled in size after repair?
Date: Thu, 27 Sep 2012 21:35:16 GMT
On Wed, Sep 26, 2012 at 12:36 PM, Peter Schuller <> wrote:
>> What is strange: every time I run repair, the data takes almost 3 times
>> more space - 270G - then I run compaction and get 100G back.
>
> [link] outlines the main issues with repair. In short - in your case the
> limited granularity of merkle trees is causing too much data to be
> streamed (effectively duplicate data).
>
> [link] may be a bandaid for you in that it allows granularity to be much
> finer, and the process to be more incremental.
Thank you, Peter!
It looks like exactly what I need. A couple of questions:
Does it work with the RandomPartitioner only? I use the ByteOrderedPartitioner.
I don't see it in any release. Am I supposed to build my own version of
Cassandra?
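
For my own understanding of the granularity problem, here is a
back-of-the-envelope sketch in Java. It assumes the often-cited cap of
2^15 leaf ranges per repair merkle tree; the row count is a made-up
number for illustration, not from my cluster:

    // Rough sketch: why coarse merkle tree leaves over-stream on repair.
    // The 2^15 leaf cap and the 100M row count are assumptions.
    public class MerkleGranularity {
        public static void main(String[] args) {
            long leaves = 1L << 15;        // leaf ranges in one repair tree
            long rowsInRange = 100000000L; // hypothetical rows in the range
            long rowsPerLeaf = rowsInRange / leaves;
            // A single mismatched row invalidates its whole leaf, so every
            // row covered by that leaf is streamed, not just the one that
            // actually differs.
            System.out.printf("~%d rows streamed per mismatched leaf%n",
                              rowsPerLeaf);
        }
    }

With those numbers each leaf covers roughly 3,000 rows, so one stale row
can drag thousands of duplicates across the wire - which would explain
the data tripling until compaction reclaims it.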