cassandra-user mailing list archives

From Mingfan Lu <>
Subject bad behavior of my Cassandra cluster
Date Wed, 04 Aug 2010 02:46:58 GMT
  I have a 4-node Cassandra cluster. I find that when the 4 nodes flush
memtables and run GC at nearly the same moment, throughput drops and
latency rises rapidly, and the nodes are repeatedly marked dead and then
up again.
 You can download the IOPS variance of the data disk (sda here) and the
system logs of these nodes from
  (if you can't download it, just tell me.)
  What happened to the cluster?
  How can I avoid such a scenario?
 *  Storage configuration
    All of the nodes act as seed nodes.
    The random partitioner is used, so the data is evenly distributed
across the 4 nodes.
    Memtable thresholds:
        MemtableThroughputInMB: 1024
        MemtableOperationsInMillions: 7
        MemtableFlushAfterMinutes: 1440
    DiskAccessMode: auto (mmap in fact)
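
 For reference, in 0.6-era Cassandra these thresholds are set in
storage-conf.xml. A minimal sketch with the values above (element names
as in 0.6; the rest of the file is omitted, so this is not the poster's
full configuration):

```xml
<!-- storage-conf.xml fragment (sketch, not a complete file) -->
<Storage>
  <!-- auto resolves to mmap on 64-bit JVMs, as observed above -->
  <DiskAccessMode>auto</DiskAccessMode>
  <MemtableThroughputInMB>1024</MemtableThroughputInMB>
  <MemtableOperationsInMillions>7</MemtableOperationsInMillions>
  <MemtableFlushAfterMinutes>1440</MemtableFlushAfterMinutes>
</Storage>
```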
 *  JVM options:
   JVM_OPTS="-ea \
             -Xms8G \
             -Xmx8G \
             -XX:+UseParNewGC \
             -XX:+UseConcMarkSweepGC \
             -XX:+CMSParallelRemarkEnabled \
             -XX:SurvivorRatio=8 \
             -XX:+UseLargePages \
             -XX:LargePageSizeInBytes=2m \
             -XX:+PrintGCDetails -XX:+PrintGCTimeStamps -XX:+PrintHeapAtGC \
             -Xloggc:/tmp/cloudstress/jvm.gc.log \
             -XX:MaxTenuringThreshold=1 \
             -XX:+HeapDumpOnOutOfMemoryError \
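
 As a rough back-of-the-envelope check on the settings above: a 1024 MB
memtable threshold against an 8 GB heap means memtables alone can occupy
a large fraction of the heap just before a flush, which is exactly when
CMS comes under pressure. A small sketch (the column-family count is a
hypothetical input, not taken from the mail):

```python
# Rough heap-pressure estimate for the quoted settings.
heap_mb = 8 * 1024      # -Xms8G / -Xmx8G
memtable_mb = 1024      # MemtableThroughputInMB

def memtable_fraction(num_cfs):
    """Fraction of the heap that full memtables alone could occupy,
    assuming num_cfs column families each near their flush threshold."""
    return num_cfs * memtable_mb / heap_mb

print(memtable_fraction(1))  # 0.125 -> 12.5% of heap for one CF
print(memtable_fraction(4))  # 0.5   -> half the heap for four CFs
```

 If all four nodes reach that point at roughly the same time, the
cluster-wide flush plus a CMS cycle on every node at once would match
the throughput drop described above.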
