Thanks for the input. I just kicked off another repair for one keyspace. Per the log, there are 1536 ranges to repair. This makes sense: there are 6 nodes in the cluster, each with 256 token ranges, so 6 * 256 = 1536. So far it is averaging 1 range per minute, so repairing the keyspace will take more than a day at this rate. I guess the only thing I can do is upgrade to 2.1 and start using incremental repair?
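For what it's worth, here is a quick back-of-the-envelope sketch (plain Python; the per-range rate is just the average I observed in my log, not a measured constant) that matches the numbers above:

    # rough estimate of full-repair wall time, assuming ~1 minute per range
    nodes = 6
    vnodes_per_node = 256        # num_tokens default
    minutes_per_range = 1.0      # observed average from the repair log

    total_ranges = nodes * vnodes_per_node              # 1536
    total_hours = total_ranges * minutes_per_range / 60
    print("%d ranges, ~%.1f hours" % (total_ranges, total_hours))
    # -> 1536 ranges, ~25.6 hours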

Thanks.

George.

On Fri, Sep 16, 2016 at 3:03 PM, Dor Laor <dor@scylladb.com> wrote:
On Fri, Sep 16, 2016 at 11:29 AM, Li, Guangxing <guangxing.li@pearson.com> wrote:
Hi,

I have a 3-node cluster, each node with less than 200 GB of data. Currently all nodes have the default num_tokens value of 256. My colleague told me that with the data size I have (less than 200 GB on each node), I should change num_tokens to something like 32 to get better performance, especially to speed up repair. Do any of you guys have experience on

It's not enough to know the volume size; it's important to know the number of keys, which affects the Merkle tree. I wouldn't change it. I doubt you'll see a significant difference in repair speed, and if you grow the cluster you will want to have enough vnodes.
 
this? I am running Cassandra Community version 2.0.9. The cluster resides in AWS. All keyspaces have RF 3.

Thanks.

George.