incubator-cassandra-user mailing list archives

From Stefan Reek <ste...@unitedgames.com>
Subject Re: Batch writes getting slow
Date Thu, 06 Oct 2011 15:53:13 GMT
On 10/06/2011 05:26 PM, Jonathan Ellis wrote:
> On Thu, Oct 6, 2011 at 10:09 AM, Stefan Reek <stefan@unitedgames.com> wrote:
>    
>> I can see that during the times the writing gets slow there are ~3000
>> pending tasks, but they disappear quickly.
>>      
> Your best bet is to make the write load more constant and less bursty.
>   If you really do need to handle bursts like that with low latency,
> then you probably do need more hardware.  (But make sure you've
> covered the basics first, like putting commitlog on a separate
> device.)
>
>    
We really need to make sure that all writes have been successfully written 
before the next batch has to be written, so I think the bursts are unavoidable.
We do have the commitlogs on separate devices. Are there any other basics 
that I could have forgotten, or any parameters that are important for write 
performance? As I understand it, the flush thresholds mainly influence read 
performance rather than write performance.
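
For what it's worth, this is roughly how the mutation-stage backlog could be 
watched over JMX while one of the bursts is written. The MutationStage MBean 
name and the JMX port are assumptions on my side and can differ between 
Cassandra versions:

import javax.management.MBeanServerConnection;
import javax.management.ObjectName;
import javax.management.remote.JMXConnectorFactory;
import javax.management.remote.JMXServiceURL;

public class PendingWritesCheck {
    public static void main(String[] args) throws Exception {
        // Adjust host/port to the node's JMX settings (7199 is the usual default).
        JMXServiceURL url = new JMXServiceURL(
                "service:jmx:rmi:///jndi/rmi://localhost:7199/jmxrmi");
        MBeanServerConnection mbs =
                JMXConnectorFactory.connect(url).getMBeanServerConnection();

        // Assumed MBean name for the write (mutation) stage; the domain/type
        // may be different in other Cassandra versions.
        ObjectName mutationStage =
                new ObjectName("org.apache.cassandra.request:type=MutationStage");

        // Sample the backlog a few times while a batch burst is in flight.
        for (int i = 0; i < 10; i++) {
            Object pending = mbs.getAttribute(mutationStage, "PendingTasks");
            System.out.println("MutationStage pending tasks: " + pending);
            Thread.sleep(1000);
        }
    }
}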

Would it make any difference to write the data with more threads from 
the client? That's something we can easily tune.
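
For example, a minimal sketch of what more client-side writer threads could 
look like; BatchClient is only a hypothetical stand-in for whatever batch 
write code is already in place, and the pool size would be the knob to tune:

import java.util.List;
import java.util.concurrent.ExecutorService;
import java.util.concurrent.Executors;
import java.util.concurrent.TimeUnit;

public class ParallelBatchWriter {

    // Hypothetical wrapper around the existing client code that writes one batch.
    interface BatchClient {
        void writeBatch(List<String> rows) throws Exception;
    }

    // Spread the burst over a fixed pool of writer threads so several batches
    // are in flight against the cluster at the same time.
    static void writeInParallel(final BatchClient client,
                                List<List<String>> batches,
                                int threads) throws InterruptedException {
        ExecutorService pool = Executors.newFixedThreadPool(threads);
        for (final List<String> batch : batches) {
            pool.submit(new Runnable() {
                public void run() {
                    try {
                        client.writeBatch(batch);
                    } catch (Exception e) {
                        // Real code would retry or record the failure so the
                        // "all writes succeeded before the next round" check still holds.
                        e.printStackTrace();
                    }
                }
            });
        }
        pool.shutdown();
        // Block until every batch has been acknowledged before the next round starts.
        pool.awaitTermination(10, TimeUnit.MINUTES);
    }
}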

>> I can also see that Cassandra gradually takes more and more memory,
>> eventually filling up the 16GB that is assigned to it, although it
>> doesn't go out of memory.
>> Is this normal behaviour? I expected to see more of a sawtooth...
>>      
> Yes, that is normal from the OS's view of the JVM.  If you want to see
> the sawtooth you'd need to look at the JVM's internal metrics, e.g.,
> with jconsole.
>
>    
I can see the sawtooth in the JVM only for the Par Eden and Par Survivor 
spaces; the CMS Old Gen space just keeps on growing, though.
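
In case it's useful for comparing notes, the same pool figures jconsole shows 
can be read through java.lang.management; a small sketch (it would have to run 
inside the Cassandra JVM, or be adapted to remote JMX):

import java.lang.management.ManagementFactory;
import java.lang.management.MemoryPoolMXBean;
import java.lang.management.MemoryType;

public class HeapPoolWatcher {
    public static void main(String[] args) throws InterruptedException {
        // With the CMS collector the heap pools are typically named
        // "Par Eden Space", "Par Survivor Space" and "CMS Old Gen".
        for (int i = 0; i < 10; i++) {
            for (MemoryPoolMXBean pool : ManagementFactory.getMemoryPoolMXBeans()) {
                if (pool.getType() == MemoryType.HEAP) {
                    long usedMb = pool.getUsage().getUsed() / (1024 * 1024);
                    // getMax() can be -1 if the pool has no defined maximum.
                    long maxMb = pool.getUsage().getMax() / (1024 * 1024);
                    System.out.println(pool.getName() + ": " + usedMb + " / " + maxMb + " MB");
                }
            }
            System.out.println("---");
            Thread.sleep(5000);
        }
    }
}

If the old generation only drops when a concurrent CMS collection completes, 
a slow ramp between collections would still be normal, as long as each 
collection actually reclaims space.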


Anyway, thanks for the quick reply.

Regards,

Stefan
