incubator-cassandra-user mailing list archives

From Stefan Reek <>
Subject Re: Batch writes getting slow
Date Fri, 07 Oct 2011 08:47:54 GMT
> Is it actually filling up enough to trigger an old-gen CMS gc?

Yes, it fills up to the 16G heap and then it starts doing CMS GCs, which
dramatically decrease performance.
I'm still not sure why it does this, as nodetool info reports the load
as less than 4G.
Any ideas?
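(For anyone hitting the same gap between heap usage and reported load: one
way to see what is actually occupying each generation is to read the heap
memory pools via JMX. This is a minimal sketch, not specific to Cassandra;
the pool name "CMS Old Gen" only appears when the CMS collector is in use,
and names vary by collector.)

```java
import java.lang.management.ManagementFactory;
import java.lang.management.MemoryPoolMXBean;
import java.lang.management.MemoryUsage;

public class OldGenCheck {
    public static void main(String[] args) {
        // Print used/max for every heap memory pool. Under CMS the old
        // generation shows up as "CMS Old Gen"; comparing its "used"
        // value against the data load reported by nodetool can show
        // whether the heap is filling with something other than data.
        for (MemoryPoolMXBean pool : ManagementFactory.getMemoryPoolMXBeans()) {
            MemoryUsage u = pool.getUsage();
            System.out.printf("%-20s used=%,d max=%,d%n",
                    pool.getName(), u.getUsed(), u.getMax());
        }
    }
}
```

The same numbers are available out-of-process with `jstat -gcold <pid>` or
over JMX with jconsole, which avoids adding code to the server.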

On 10/06/2011 06:15 PM, Jonathan Ellis wrote:
> On Thu, Oct 6, 2011 at 10:53 AM, Stefan Reek <> wrote:
>> We do have the commitlogs on separate devices, are there any other basics
>> that I could have forgotten, or
>> any parameters that are important for write performance?
> 1.0 write performance is something like 30% better...  I don't think
> there's anything else you'll find for "free."
>> As I understand it
>> the flush thresholds mainly
>> influence read performance instead of write performance.
> It can affect write performance if you're flushing really small
> sstables, but I doubt that's the problem here.
>> Would it make any difference to write the data with more threads from the
>> client, as that's something we can easily tune.
> Not in this case because Cassandra turns the batch into single-row
> writes internally, so it gets parallelized that way.
> If you can avoid waiting for One Big Batch and stream changes in as
> they happen, that would help.
>> I can see the sawtooth in the JVM only for Par Eden and Par Survivor space,
>> the CMS Old Gen space just keeps on growing though.
> Is it actually filling up enough to trigger an old-gen CMS gc?
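
(To illustrate Jonathan's point about batches: since the server applies each
row of a batch as an independent single-row write, the client gains nothing
by accumulating One Big Batch; it can submit rows as they become available.
A rough sketch, with a hypothetical writeRow() standing in for the actual
client call:)

```java
import java.util.List;
import java.util.concurrent.ExecutorService;
import java.util.concurrent.Executors;
import java.util.concurrent.TimeUnit;
import java.util.concurrent.atomic.AtomicInteger;

public class StreamingWrites {
    static final AtomicInteger written = new AtomicInteger();

    // Hypothetical stand-in for a single-row client write; in practice
    // this would be a call into the Cassandra client library.
    static void writeRow(String row) {
        written.incrementAndGet();
    }

    public static void main(String[] args) throws Exception {
        List<String> rows = List.of("r1", "r2", "r3", "r4");
        ExecutorService pool = Executors.newFixedThreadPool(4);
        // Submit each row as soon as it exists instead of waiting to
        // send one big batch at the end; the writes parallelize the
        // same way the server would parallelize a batch internally.
        for (String row : rows) {
            pool.submit(() -> writeRow(row));
        }
        pool.shutdown();
        pool.awaitTermination(10, TimeUnit.SECONDS);
        System.out.println("rows written: " + written.get());
    }
}
```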
