That sounds a little odd; it may help if you post the yaml settings, the
tpstats output, and the log lines that look something like this...

INFO [ROW-MUTATION-STAGE:5] 2010-09-04 15:43:49,402 (line 790) Enqueuing flush of 
Memtable-Super1@1754565178(80208 bytes, 2304 operations)
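For example, something along these lines should capture both (a sketch; it assumes nodetool is on your path and the default system.log location, so adjust the host and paths to match your install):

    nodetool -h localhost tpstats
    grep "Enqueuing flush" /var/log/cassandra/system.log | tail -20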

Also wondering why you chose such high memtable thresholds and how
that's working out. Do you expect to trigger the operation-count
threshold or the throughput threshold in normal processing? Have you
seen the additional guidance on memtable tuning here
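For reference, the per-CF memtable knobs in a 0.7-style cassandra.yaml look roughly like this (a sketch only; Keyspace1 and the values shown are placeholders, not your actual settings):

    keyspaces:
        - name: Keyspace1
          column_families:
            - name: Super1
              column_type: Super
              memtable_throughput_in_mb: 128        # flush once serialized size passes this
              memtable_operations_in_millions: 0.3  # flush once the operation count passes this
              memtable_flush_after_mins: 60         # flush after this long regardless

Whichever threshold is hit first triggers the flush, so the three need to be sized together.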


On 7 Sep 2010, at 18:33, Mubarak Seyed wrote:

I have an 8-node cluster. MemtableThreshold is 2 GB per CF, MemtableObjectsCount is 1.2, heap (min/max) is 30 GB, and there are only 4 ColumnFamilies.

It appears from system.log that a flush happens after fewer than 50 operations (read or write), compaction is happening very frequently, and I can see lots of smaller SSTables being created. For just 1000 inserts, I see around 20 SSTables.
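If I read the settings right (assuming MemtableObjectsCount is expressed in millions, as with the old storage-conf naming), the flush points should be roughly:

    2 GB throughput threshold -> ~2,000 MB of serialized writes per CF
    1.2 object count          -> ~1,200,000 operations per CF

so a flush after fewer than 50 operations is several orders of magnitude earlier than either threshold.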

When I change MemtableThreshold to 1 GB per CF, everything works as desired.

Any idea what the problem could be when I set MemtableThreshold to 2 GB per CF, even though I specified a large heap?

Mubarak Seyed.