cassandra-user mailing list archives

From aaron morton <aa...@thelastpickle.com>
Subject Re: Best suitable value for flush_largest_memtables_at
Date Thu, 23 Feb 2012 18:59:23 GMT
> flush_largest_memtables_at
It is designed as a safety valve; reducing it may help prevent an OOM, but it won't address the underlying cause.
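
For context, the valve is a fraction of the JVM heap set in cassandra.yaml. A minimal sketch of the relevant line (0.75 matches the value mentioned further down; check the yaml shipped with your own version for its default):

    # emergency pressure valve: when heap usage after a full (CMS) GC is above
    # this fraction of the max heap, flush the largest memtables to free memory
    flush_largest_memtables_at: 0.75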

Assuming you cannot just allocate more memory to the JVM, that you are running the default settings in cassandra-env.sh (other than the changes mentioned), and that you are on 1.x:

I would start with the following, in order (a sketch of the yaml settings follows the list):

* set a value for memtable_total_space_in_mb in cassandra.yaml
* reduce CF caches
* reduce in_memory_compaction_limit and/or concurrent_compactors
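
For illustration only, a minimal cassandra.yaml sketch of the yaml-level knobs above; the values are placeholders rather than recommendations, the full key for the compaction limit is in_memory_compaction_limit_in_mb, and the per-CF caches from the second bullet are configured on the column family definitions rather than in this file, so they are not shown:

    # total memory allowed for all memtables; if left blank it defaults to
    # one third of the heap (1024 here is an illustrative value only)
    memtable_total_space_in_mb: 1024

    # rows larger than this are compacted on disk with a slower two-pass
    # compaction instead of in memory; lowering it reduces memory pressure
    in_memory_compaction_limit_in_mb: 64

    # number of simultaneous compactions; fewer compactors means less
    # memory and I/O used by compaction at once
    concurrent_compactors: 2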
 
Hope that helps. 

-----------------
Aaron Morton
Freelance Developer
@aaronmorton
http://www.thelastpickle.com

On 23/02/2012, at 4:21 PM, Roshan Pradeep wrote:

> Hi Experts
> 
> Under massive write load, what would be the best value for the Cassandra flush_largest_memtables_at setting? Yesterday I got an OOM exception on one of our production Cassandra nodes under heavy write load, within a 5-minute window.
> 
> I changed the above setting to .45 and also changed -XX:CMSInitiatingOccupancyFraction=45 in the cassandra-env.sh file.
> 
> Previously flush_largest_memtables_at was .75 and memtables were flushed to SSTables of around 40MB. With the change (reducing it to .45), the flushed SSTable size is 90MB.
> 
> Could someone please explain whether my configuration change will help under heavy write load?
> 
> Thanks.

