cassandra-user mailing list archives

From Wei Zhu <>
Subject Re: Cassandra pending compaction tasks keeps increasing
Date Fri, 25 Jan 2013 07:18:59 GMT
Thanks Derek,
In, it says:

    # reduce the per-thread stack size to minimize the impact of Thrift
    # thread-per-client.  (Best practice is for client connections to
    # be pooled anyway.)  Only do so on Linux where it is known to be
    # supported.
    # u34 and greater need 180k
    JVM_OPTS="$JVM_OPTS -Xss180k"
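Just so I'm clear about the change, here's a sketch of what I'd append near the end of The 256k value is only a guess between the shipped 180k and the JVM default, and the existing JVM_OPTS content below is just a stand-in for whatever is already set:

```shell
# Sketch: override the Thrift thread stack size in
# 256k is a guess between the shipped 180k and the JVM default.
JVM_OPTS="-ea -Xms4G"              # stand-in for options set earlier in the file
JVM_OPTS="$JVM_OPTS -Xss256k"      # larger per-thread stack to avoid the overflow
echo "$JVM_OPTS"
```

Since later -Xss flags win, appending it at the end of the file should be enough to override the shipped value.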

What value should I use? Java defaults to 400K? Maybe I'll try that first.


----- Original Message -----
From: "Derek Williams" <>
To: "Wei Zhu" <>
Sent: Thursday, January 24, 2013 11:06:00 PM
Subject: Re: Cassandra pending compaction tasks keeps increasing

Increasing the stack size in should help you get past the stack overflow.
It doesn't help with your original problem, though.

On Fri, Jan 25, 2013 at 12:00 AM, Wei Zhu < > wrote: 

Well, even after a restart, it throws the same exception. I am basically stuck. Any suggestion
to clear the pending compaction tasks? Below is the end of the stack trace:

at org.apache.cassandra.db.DataTracker.buildIntervalTree(
at org.apache.cassandra.db.compaction.CompactionController.<init>(
at org.apache.cassandra.db.compaction.CompactionTask.execute(
at org.apache.cassandra.db.compaction.LeveledCompactionTask.execute(
at org.apache.cassandra.db.compaction.CompactionManager$1.runMayThrow(
at java.util.concurrent.Executors$ Source)
at java.util.concurrent.FutureTask$Sync.innerRun(Unknown Source)
at Source)
at java.util.concurrent.ThreadPoolExecutor$Worker.runTask(Unknown Source)
at java.util.concurrent.ThreadPoolExecutor$ Source)
at Source)
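For reference, here's what I'm planning to try to unwedge the node, written as tiny helpers so the host isn't hard-coded. The nodetool verbs come from the 1.1 docs; I haven't confirmed that stop COMPACTION actually clears the queue:

```shell
# Build the nodetool invocations to run against the stuck node.
# $1 is the host; verbs are from the 1.1 nodetool help output.
stop_compaction_cmd() { echo "nodetool -h $1 stop COMPACTION"; }
stats_cmd()           { echo "nodetool -h $1 compactionstats"; }

STOP=$(stop_compaction_cmd localhost)
STATS=$(stats_cmd localhost)
echo "$STOP"     # cancel whatever compaction is currently running
echo "$STATS"    # then check whether the pending count moves
```

The idea is to cancel the wedged compaction first, then watch compactionstats to see if the executor picks up fresh tasks.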

Any suggestion is very much appreciated.


----- Original Message ----- 
From: "Wei Zhu" < > 
Sent: Thursday, January 24, 2013 10:55:07 PM 
Subject: Re: Cassandra pending compaction tasks keeps increasing 

Do you mean 90% of the reads should come from 1 SSTable? 

By the way, after I finished the data migration, I ran nodetool repair -pr on one of the nodes.
Before nodetool repair, all the nodes had the same disk space usage. After I ran the
repair, the disk space for that node jumped from 135G to 220G, and there are more than 15000
pending compaction tasks. After a while, Cassandra started to throw the exception below
and stopped compacting. I had to restart the node. By the way, we are using 1.1.7. Something
doesn't seem right.

INFO [CompactionExecutor:108804] 2013-01-24 22:23:10,427 (line 109) Compacting
INFO [CompactionExecutor:108804] 2013-01-24 22:23:11,610 (line 221) Compacted
to [/ssd/cassandra/data/zoosk/friends/zoosk-friends-hf-754996-Data.db,]. 5,259,403 to 5,259,403
(~100% of original) bytes for 1,983 keys at 4.268730MB/s. Time: 1,175ms.
INFO [CompactionExecutor:108805] 2013-01-24 22:23:11,617 (line 109) Compacting
INFO [CompactionExecutor:108805] 2013-01-24 22:23:12,828 (line 221) Compacted
to [/ssd/cassandra/data/zoosk/friends/zoosk-friends-hf-754997-Data.db,]. 5,272,746 to 5,272,746
(~100% of original) bytes for 1,941 keys at 4.152339MB/s. Time: 1,211ms.
ERROR [CompactionExecutor:108806] 2013-01-24 22:23:13,048 (line 135) Exception in thread Thread[CompactionExecutor:108806,1,main]
at java.util.AbstractList$Itr.hasNext(Unknown Source) 

----- Original Message ----- 
From: "aaron morton" < > 
Sent: Wednesday, January 23, 2013 2:40:45 PM 
Subject: Re: Cassandra pending compaction tasks keeps increasing 

The histogram does not look right to me; too many SSTables for an LCS CF.

It's a symptom, not a cause. If LCS is catching up, though, it should look more like the distribution
in the linked article.


Aaron Morton 
Freelance Cassandra Developer 
New Zealand 


On 23/01/2013, at 10:57 AM, Jim Cistaro < > wrote: 

What version are you using? Are you seeing any compaction related assertions in the logs?

We experienced this problem of the count only decreasing to a certain number and then stopping.
If the node is idle, it should go to 0. I have not seen it overestimate for zero, only for non-zero counts.

As for timeouts etc., you will need to look at things like nodetool tpstats to see if you have
pending transactions queueing up.


From: Wei Zhu < > 
Reply-To: " " < >, Wei Zhu < > 
Date: Tuesday, January 22, 2013 12:56 PM 
To: " " < > 
Subject: Re: Cassandra pending compaction tasks keeps increasing 

Thanks Aaron and Jim for your reply. The data import is done. We have about 135G on each node
and about 28K SSTables. For normal operation, we only have about 90 writes per second,
but when I ran nodetool compactionstats, the pending count stayed at 9 and hardly changed. I guess it's just
an estimated number.

When I ran cfhistograms:

Offset  SSTables  Write Latency  Read Latency  Row Size  Column Count
1           2644              0             0         0      18660057
2           8204              0             0         0       9824270
3          11198              0             0         0       6968475
4           4269              6             0         0       5510745
5            517             29             0         0       4595205
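If I fold the SSTables column into a cumulative distribution (a quick awk sketch; the counts are copied from the table above):

```shell
# Cumulative share of reads served by <= N SSTables, from the histogram above.
OUT=$(awk 'BEGIN {
  n = split("2644 8204 11198 4269 517", c, " ")
  total = 0
  for (i = 1; i <= n; i++) total += c[i]
  cum = 0
  for (i = 1; i <= n; i++) {
    cum += c[i]
    printf "<=%d sstables: %.1f%%\n", i, 100 * cum / total
  }
}')
echo "$OUT"
```

So only about 10% of reads here are satisfied by a single SSTable, well short of the ~90% LCS is supposed to give once compaction has caught up.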

You can see about half of the reads touch 3 SSTables. The majority of read latencies are under
5ms; only a dozen are over 10ms. We haven't fully turned on reads yet, only 60 reads per second.
We saw about 20 read timeouts during the past 12 hours, and not a single warning from Cassandra.

Is it normal for Cassandra to time out some requests? We set rpc_timeout to 1s; it shouldn't
time out any of them?


From: aaron morton < > 
Sent: Monday, January 21, 2013 12:21 AM 
Subject: Re: Cassandra pending compaction tasks keeps increasing 

The main guarantee LCS gives you is that most reads will only touch 1 SSTable.

If compaction is falling behind this may not hold. 

nodetool cfhistograms tells you how many SSTables were read from for reads. It's a recent
histogram that resets each time you read from it. 

Also, parallel levelled compaction is available in 1.2.


Aaron Morton 
Freelance Cassandra Developer 
New Zealand 


On 20/01/2013, at 7:49 AM, Jim Cistaro < > wrote: 

1) In addition to iostat, dstat is a good tool to see what kind of disk throughput you are
getting. That would be one thing to monitor.
2) For LCS, we also see pending compactions skyrocket. During load, LCS will create a lot
of small sstables which will queue up for compaction.
3) For us the biggest concern is not how high the pending count gets, but how often it gets
back down near zero. If your load is something you can do in segments or pause, then you can
see how fast the cluster recovers on the compactions.
4) One thing which we tune per cluster is the size of the files. Increasing this from 5MB
can sometimes improve things. But I forget if we have ever changed this after starting data load.
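Point 3 is easy to script; here's a rough sketch of the sort of loop we use. The nodetool pipeline in the comment is what we'd really poll; the stub below just makes the sketch self-contained:

```shell
# Wait until the pending-compaction count reported by $1 falls below $2.
poll_pending() {
  get_count=$1; threshold=$2
  while [ "$($get_count)" -ge "$threshold" ]; do
    sleep 1   # in real use: sleep 60 between nodetool calls
  done
}

# Stub standing in for:
#   nodetool compactionstats | awk '/pending tasks/ {print $3}'
fake_count() { echo 0; }

poll_pending fake_count 10 && echo "pending compactions drained"
```

Timing how long that loop takes after you pause the load gives you a concrete number for how fast the cluster recovers.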

Is your cluster receiving read traffic during this data migration? If so, I would say that
read latency is your best measure. If the high number of SSTables waiting to compact is not
hurting your reads, then you are probably OK. Since you are on SSDs, there is a good chance
the compactions are not hurting you. As for compaction throughput, we set ours high for SSDs.
You usually won't use it all because compactions are usually single threaded. Dstat will
help you measure this.

I hope this helps, 

From: Wei Zhu < > 
Reply-To: " " < >, Wei Zhu < > 
Date: Friday, January 18, 2013 12:10 PM 
To: Cassandra usergroup < >
Subject: Cassandra pending compaction tasks keeps increasing 

When I run nodetool compactionstats 

I see the number of pending tasks keep going up steadily. 

I tried to increase the compaction throughput by using:

nodetool setcompactionthroughput 

I even went to the extreme of setting it to 0 to disable the throttling.

I checked iostat, and we have SSDs for data; the disk util is less than 5%, which means it's
not I/O bound. CPU is also less than 10%.
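This is roughly how I'm pulling that util number out of iostat; the device name sda and the sample line are just stand-ins for my actual output:

```shell
# Pull %util (last column of `iostat -x`) for the data device.
# The sample line stands in for live `iostat -x 5 2` output.
SAMPLE="sda 0.00 1.20 0.40 3.10 12.80 34.40 27.00 0.01 2.10 1.30 4.60"
UTIL=$(echo "$SAMPLE" | awk '$1 == "sda" {print $NF}')
echo "disk util: ${UTIL}%"
```

Taking the second report from `iostat -x 5 2` matters, since the first report is averaged since boot rather than over the interval.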

We are using leveled compaction and are in the process of migrating data. We have 4500 writes per
second and very few reads. We have about 70G of data now, which will grow to 150G when the migration
finishes. We only have one CF, and right now the number of SSTables is around 15000; write latency
is still under 0.1ms.

Is there anything to be concerned about? Or anything I can do to reduce the number of pending compactions?



Derek Williams 
