cassandra-user mailing list archives

From "Freeman, Tim" <>
Subject RE: Persistently increasing read latency
Date Thu, 03 Dec 2009 22:34:17 GMT
>Can you tell if the system is i/o or cpu bound during compaction?

It's I/O bound.  It's using ~9% of 1 of 4 cores as I watch it, and all it's doing right now
is compactions.
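
[Editorial sketch, not part of the original thread.] One minimal way to quantify "I/O bound" on Linux is to sample the first line of /proc/stat twice and compute the share of CPU time spent in iowait; a mostly-idle CPU with high iowait matches the ~9%-of-one-core picture above. Field layout follows proc(5).

```python
# Sketch: estimate the fraction of CPU time spent waiting on I/O by
# sampling /proc/stat (Linux-only) over a short interval.
import time

def cpu_times():
    with open("/proc/stat") as f:
        # First line: "cpu  user nice system idle iowait irq softirq ..."
        return [int(x) for x in f.readline().split()[1:]]

def iowait_pct(interval=1.0):
    before = cpu_times()
    time.sleep(interval)
    after = cpu_times()
    deltas = [a - b for a, b in zip(after, before)]
    total = sum(deltas)
    # deltas[4] is the iowait column per proc(5)
    return 100.0 * deltas[4] / total if total else 0.0

print(f"iowait: {iowait_pct():.1f}%")
```

High iowait with low user/system time during compaction would confirm the disk, not the CPU, is the bottleneck.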

Tim Freeman
Desk in Palo Alto: (650) 857-2581
Home: (408) 774-1298
Cell: (408) 348-7536 (No reception business hours Monday, Tuesday, and Thursday; call my desk.)

-----Original Message-----
From: Jonathan Ellis [] 
Sent: Thursday, December 03, 2009 2:19 PM
Subject: Re: Persistently increasing read latency

On Thu, Dec 3, 2009 at 3:59 PM, Freeman, Tim <> wrote:
> I stopped the client at 11:28.  There were 2306 files in data/Keyspace1.  It's now
> 12:44, and there are 1826 files in data/Keyspace1.  As I wrote this email, the number
> increased to 1903, then to 1938 and 2015, even though the server has no clients.  I used
> jconsole to invoke a few explicit garbage collections and the number went down to 811.

Sounds normal.
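
[Editorial sketch, not Cassandra's actual code.] The reason an explicit GC drops the on-disk file count is that obsolete sstable files are only unlinked when the JVM collects the objects still referencing them. A toy illustration of that deferred-deletion pattern (hypothetical `SSTableHandle` name):

```python
# Sketch of deferred file deletion: the file is only removed when the
# owning object is garbage collected, so forcing a collection (as via
# jconsole's GC button) can make the file count drop suddenly.
import gc, os, tempfile, weakref

class SSTableHandle:
    def __init__(self, path):
        self.path = path
        self._cycle = self                     # cycle keeps it alive until a full GC pass
        weakref.finalize(self, os.remove, path)

path = tempfile.mkstemp()[1]
handle = SSTableHandle(path)
handle = None                  # logically obsolete after compaction
print(os.path.exists(path))    # True: still on disk
gc.collect()                   # explicit collection runs the finalizer
print(os.path.exists(path))    # False: file deleted
```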

> jconsole reports that the compaction pool has 1670 pending tasks.  As I wrote this email,
> the number gradually increased to 1673.  The server has no clients, so this is odd.  The
> number of completed tasks in the compaction pool has consistently been going up while the
> number of pending tasks stays the same.  The number of completed tasks increased from 130
> to 136.

This is because whenever compaction finishes, it adds another
compaction task to see if the newly compacted table is itself large
enough to compact with others.  In a system where compaction has kept
up with demand, these are quickly cleaned out of the queue, but in
your case they are stuck behind all the compactions that are merging
your backlog of sstables.
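
[Editorial sketch, not Cassandra's implementation.] The queue behaviour described above can be modelled in a few lines: every finished compaction submits a follow-up check for the table it just wrote, so pending counts creep up whenever real compactions hold the queue.

```python
# Toy model of a compaction pool where each finished compaction queues a
# follow-up check on the newly written table (threshold is illustrative).
from queue import Queue

def compact(generation, pending, completed):
    completed.append(generation)   # one compaction finished
    # Re-queue: should the freshly compacted table be compacted again?
    if generation < 3:
        pending.put(generation + 1)

pending = Queue()
completed = []
pending.put(0)
while not pending.empty():
    compact(pending.get(), pending, completed)

print(len(completed))  # 4: each compaction triggered the next check
```

When the pool keeps up, these follow-up checks drain as fast as they are added; behind a backlog they accumulate as "pending tasks".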

So this is working as designed, but the design is poor because it
causes confusion.  If you can open a ticket for this that would be
great.
> log.2009-12-02-19: WARN [Timer-0] 2009-12-02 19:55:23,305 (line 44) Exception was
> generated at : 12/02/2009 19:55:22 on thread Timer-0

These have been fixed and are unrelated to compaction.

So, it sounds like things are working, and if you leave it alone for a
while it will finish compacting everything and the queue of compaction
jobs will clear out, and reads should be fast(er) again.

Like I said originally, increasing memtable size / object count will
reduce the number of compactions required.  That's about all you can do
in 0.5...  Can you tell if the system is i/o or cpu bound during
compaction?
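
[Editorial note.] In the 0.5-era configuration those knobs live in storage-conf.xml; a sketch with illustrative values, not recommendations (verify the element names against your own config):

```xml
<!-- storage-conf.xml: larger memtables flush less often, producing
     fewer sstables and therefore fewer compactions. -->
<MemtableSizeInMB>128</MemtableSizeInMB>
<MemtableObjectCountInMillions>0.3</MemtableObjectCountInMillions>
```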

