incubator-cassandra-user mailing list archives

From Derek Bromenshenkel <derek.bromenshen...@gmail.com>
Subject How to determine compaction bottlenecks
Date Tue, 27 Nov 2012 15:23:29 GMT
Setup: C* 1.1.6, 6 nodes (Linux, 64GB RAM, 16-core CPU, 2x512 SSD), RF=3, 1.65TB
total used
Background: The client app is off, so no reads or writes are happening. I'm doing
some cluster maintenance that requires node repairs and upgradesstables.
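For context, the maintenance pass amounts to roughly the following on each node in turn (host flag and ordering are illustrative, not a prescribed procedure):

```shell
# Hypothetical per-node maintenance pass; run against one node at a time.
nodetool -h localhost repair            # anti-entropy repair of this node's ranges
nodetool -h localhost upgradesstables   # rewrite SSTables in the current format
nodetool -h localhost compactionstats   # watch pending/active compactions drain
```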

I've been trying to figure out what is making compactions run so slowly. Watching
syslogs, they seem to average 3-4MB/s. That seems very slow for this setup,
especially given that there is zero external load on the cluster.
As far as I can tell:
1. Not I/O bound, according to iostat data
2. The CPU also seems to be idling
3. From my understanding, I am using the correct compaction settings for this
setup; here they are:

snapshot_before_compaction: false
in_memory_compaction_limit_in_mb: 256
multithreaded_compaction: true
compaction_throughput_mb_per_sec: 128
compaction_preheat_key_cache: true
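This is how I've been cross-checking points 1 and 2 while compactions run (the polling interval and the unthrottling A/B test are my own choices, not anything prescribed; in my understanding, a throughput value of 0 disables throttling entirely):

```shell
# Watch disk and CPU while nodetool reports compaction progress.
iostat -x 5                            # per-device %util and await on the SSDs
mpstat -P ALL 5                        # per-core utilization (compaction threads)
nodetool compactionstats               # bytes compacted vs. total, pending tasks

# Quick A/B test: remove the throttle entirely and see if throughput changes.
nodetool setcompactionthroughput 0     # 0 = unthrottled (assumption for 1.1.x)
```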

Some other thoughts:
- I have turned on DEBUG logging for the Throttle class and played with the live
compaction_throughput_mb_per_sec setting. I can see it performing the throttling
if I set the value low (say, 4), but at anything over 8 it is apparently running
wide open. [Side note: although the math in the Throttle class adds up, overall
the throttling seems very, very conservative.]
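The throttle math I have in mind is roughly the following (a simplified sketch of rate-based throttling in shell arithmetic, not Cassandra's actual Throttle implementation; all numbers are made up for illustration): if the bytes processed since the last check exceed the budget implied by the target rate, sleep off the difference.

```shell
# Sketch: how long should a rate limiter sleep after a burst of work?
target_mb_s=4                                 # configured throughput cap
elapsed_ms=100                                # wall time since last check
bytes_done=$((8 * 1024 * 1024))               # 8 MB processed in that window

# Budget for the elapsed window at the target rate.
target_bytes=$((target_mb_s * 1024 * 1024 * elapsed_ms / 1000))

if [ "$bytes_done" -gt "$target_bytes" ]; then
  # Time the work should have taken at the cap, minus time actually spent.
  sleep_ms=$((bytes_done * 1000 / (target_mb_s * 1024 * 1024) - elapsed_ms))
else
  sleep_ms=0
fi
echo "$sleep_ms"                              # prints 1900 for these inputs
```

With a cap of 4MB/s, 8MB of work should take 2000ms, so after a 100ms burst the limiter sleeps 1900ms; with a high cap the budget is never exceeded and it effectively runs wide open, which matches what I see above 8.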
- I accidentally turned on DEBUG for the entire ...compaction.* package, which
unintentionally generated a lot of I/O from the ParallelCompactionIterable class,
and the disk/OS handled that just fine.

Perhaps I just don't fully grasp what is going on, or my expectations are off.
I am OK with things being slow if the hardware is working hard, but that does
not seem to be the case here.

Anyone have some insight?

Thanks

