And for production, is 7 GB of RAM sufficient? Or is 11 GB the minimum?
Thank you for your input on the JVM; I'll try to tune that.


2011/4/4 Peter Schuller <peter.schuller@infidyne.com>
> You can change VM settings and tweak things like memtable thresholds
> and in-memory compaction limits to get it down and get away with a
> smaller heap size, but honestly I don't recommend doing so unless
> you're willing to spend some time getting that right and probably
> repeating some of the work in the future with future versions of
> Cassandra.

That said, if you do want to give it a try, I suggest first changing
cassandra-env.sh to remove all the GC-related options:

VM_OPTS="$JVM_OPTS -XX:+UseParNewGC"
JVM_OPTS="$JVM_OPTS -XX:+UseConcMarkSweepGC"
JVM_OPTS="$JVM_OPTS -XX:+CMSParallelRemarkEnabled"
JVM_OPTS="$JVM_OPTS -XX:SurvivorRatio=8"
JVM_OPTS="$JVM_OPTS -XX:MaxTenuringThreshold=1"
JVM_OPTS="$JVM_OPTS -XX:CMSInitiatingOccupancyFraction=75"
JVM_OPTS="$JVM_OPTS -XX:+UseCMSInitiatingOccupancyOnly"

And then set a fixed (smaller) heap size, and remove the manual sizing
of the new generation, i.e. this line:

JVM_OPTS="$JVM_OPTS -Xmn${HEAP_NEWSIZE}"

Then maybe remove the initial heap size enforcement (the -Xms setting),
though that may or may not help depending on your setup:

JVM_OPTS="$JVM_OPTS -Xms${MAX_HEAP_SIZE}"

And then go through cassandra.yaml and tune down all the various
limits: fewer concurrent readers/writers, all the *_mb_* settings
turned way down, and the RPC framing limits.
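
From memory of the 0.7-era cassandra.yaml (so treat this purely as a
sketch and double-check the exact names and sensible values for your
version), the kind of thing I mean is:

# cassandra.yaml -- illustrative low-end values, not recommendations
concurrent_reads: 4                    # fewer concurrent readers
concurrent_writes: 4                   # fewer concurrent writers
in_memory_compaction_limit_in_mb: 8    # one of the *_mb_* knobs
binary_memtable_throughput_in_mb: 32   # another *_mb_* knob
thrift_framed_transport_size_in_mb: 4  # RPC framing limits
thrift_max_message_length_in_mb: 5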

But let me reiterate: I don't recommend running any such configuration
in production. But if you just want it running for testing, or just to
have it available, with no special requirements and not in production,
the above might work. I haven't really tested it myself; there may be
gotchas involved.

--
/ Peter Schuller