cassandra-user mailing list archives

From: Jonas Borgström <>
Subject: Re: AW: Strange nodetool repair behaviour
Date: Tue, 05 Apr 2011 14:25:22 GMT
On 04/05/2011 03:49 PM, Jonathan Ellis wrote:
> Sounds like

Yes, that sounds like the issue I'm having. Any chance of a fix for
this being backported to 0.7.x?

Anyway, I guess I might as well share the test case I've used to
reproduce this problem:

Cluster configuration: 6 nodes running 0.7.4 with RF=3

1. Create the keyspace and column families (see attached)
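
The actual schema was attached to the original message and isn't
reproduced here; a minimal 0.7-era cassandra-cli equivalent (keyspace
name taken from the paths below, everything else assumed) might look
like:

create keyspace repair_test3
    with replication_factor = 3
    and placement_strategy = 'org.apache.cassandra.locator.SimpleStrategy';
use repair_test3;
create column family A;
create column family B;
create column family C;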

2. Insert 20 keys of 100 MB each into each of the column families A, B and C:

$ python
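
The insert script itself wasn't included; a rough pycassa sketch of
what it might have done (the key/column names and the chunking scheme
are assumptions, chunking only to stay under thrift's frame size
limit):

import pycassa

# Connect to the cluster; keyspace and host are taken from the steps above
pool = pycassa.ConnectionPool('repair_test3', ['node1:9160'])

chunk = 'x' * (1024 * 1024)  # 1 MB of filler data
for cf_name in ('A', 'B', 'C'):
    cf = pycassa.ColumnFamily(pool, cf_name)
    for k in range(20):
        # 100 x 1 MB columns per key = 100 MB per key,
        # written one column at a time to keep each mutation small
        for i in range(100):
            cf.insert('key%d' % k, {'chunk%03d' % i: chunk})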

This results in 2.4GB worth of sstables on node1:

$ du -sh /data/cassandra/data/repair_test3/
2.4G	/data/cassandra/data/repair_test3/

3. Run repair:

$ time nodetool -h node1 repair repair_test3
real	3m28.218s

During the repair, the log reported streaming of 1 to 3 ranges for each
column family, the sstable directory filled up with
"<column-family>-tmp-" files, and disk usage peaked at over 10 GB.

The repair completed successfully and disk usage settled at 6.4GB:

$ du -sh /data/cassandra/data/repair_test3/
6.4G	/data/cassandra/data/repair_test3/

4. Run repair again:

$ time nodetool -h node1 repair repair_test3
real	9m23.514s

This time disk usage peaked at over 25 GB before settling at 4.7GB, and
repair reported even more ranges out of sync than the first run.

So this issue seems to cause repair to take a very long time, to send a
lot of unnecessary data over the network, and to leave a lot of "air"
in the resulting sstables, space that can only be reclaimed by
triggering major compactions.
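
For reference, such a major compaction can be forced on each node with:

$ nodetool -h node1 compact repair_test3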

(A GC was triggered before each disk usage measurement.)

