cassandra-user mailing list archives

From preetika tyagi <>
Subject Re: Impact on latency with larger memtable
Date Thu, 25 May 2017 00:23:31 GMT
Thank you all for the responses. I figured out the root cause:
I thought all my data resided in the memtable only, but it was actually being
flushed to disk. That's why I was seeing the drop in throughput.
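
For anyone hitting the same symptom, one way to confirm whether memtables are
being flushed is via nodetool (a sketch, assuming nodetool is on the path and
run against a live node; the keyspace/table names are placeholders):

```shell
# A rising "SSTable count" means memtables are being flushed to disk;
# "Memtable data size" shows what is still resident in the heap.
nodetool tablestats my_keyspace.my_table | grep -E 'SSTable count|Memtable data size'

# Recent flush/compaction activity tells the same story over time.
nodetool compactionhistory | head
```

(On Cassandra releases before 3.0 the equivalent command is `nodetool cfstats`.)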

On Wed, May 24, 2017 at 9:42 AM, daemeon reiydelle <> wrote:

> You speak of an increase. Please provide your results with specific
> examples, e.g. a 25% increase in heap results in an n% change in throughput.
> Also please include the number of nodes, total keyspace size, replication
> factor, etc.
> Hopefully this is a 6-node cluster with several hundred gigs per keyspace,
> not some single-node free-tier box.
> “All men dream, but not equally. Those who dream by night in the dusty
> recesses of their minds wake up in the day to find it was vanity, but the
> dreamers of the day are dangerous men, for they may act their dreams with
> open eyes, to make it possible.” — T.E. Lawrence
> sent from my mobile
> Daemeon Reiydelle
> skype daemeon.c.m.reiydelle
> USA 415.501.0198
> On May 24, 2017 9:32 AM, "preetika tyagi" <> wrote:
>> Hi,
>> I'm experimenting with the memtable/heap size on my Cassandra server to
>> understand how it impacts latency/throughput for read requests.
>> I vary the heap size (-Xms and -Xmx) in jvm.options, so the memtable will
>> be 1/4 of this. When I increase the heap size, and hence the memtable, I
>> notice a drop in throughput and an increase in latency. I'm also creating
>> the database such that its size doesn't exceed the size of the memtable.
>> Therefore, all data exists in the memtable, and I can't see why a bigger
>> memtable would result in higher latency/lower throughput.
>> Since everything is in DRAM, shouldn't the throughput/latency remain the
>> same in all cases?
>> Thanks,
>> Preetika
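
For reference, the knobs discussed above live in two files: the heap is set in
jvm.options, and the memtable heap allowance in cassandra.yaml. A minimal
sketch (the sizes are illustrative, not taken from the thread):

```
# conf/jvm.options -- pin a fixed heap; -Xms and -Xmx should match
-Xms8G
-Xmx8G

# conf/cassandra.yaml -- memtable heap allowance;
# when left unset it defaults to 1/4 of the heap
memtable_heap_space_in_mb: 2048
```

Note that even when the working set fits in the memtable space, Cassandra
still flushes once the cleanup threshold is crossed, so "all data in DRAM"
does not by itself prevent SSTables from appearing on disk.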
