cassandra-user mailing list archives

From daemeon reiydelle <daeme...@gmail.com>
Subject Re: Impact on latency with larger memtable
Date Wed, 24 May 2017 16:42:19 GMT
You speak of an increase. Please provide your results with specific numbers, e.g. a 25% increase in memtable size resulted in an n% increase in latency. Also please include the number of nodes, total keyspace size, replication factor, etc.

Hopefully this is a six-node cluster with several hundred gigabytes per keyspace, not a single-node free-tier box.

“All men dream, but not equally. Those who dream by night in the dusty
recesses of their minds wake up in the day to find it was vanity, but the
dreamers of the day are dangerous men, for they may act their dreams with
open eyes, to make it possible.” — T.E. Lawrence

sent from my mobile
Daemeon Reiydelle
skype daemeon.c.m.reiydelle
USA 415.501.0198

On May 24, 2017 9:32 AM, "preetika tyagi" <preetikatyagi@gmail.com> wrote:

> Hi,
>
> I'm experimenting with memtable/heap size on my Cassandra server to
> understand how it impacts latency/throughput for read requests.
>
> I vary the heap size (-Xms and -Xmx) in jvm.options, so the memtable will be
> 1/4 of this. When I increase the heap size, and hence the memtable, I notice
> a drop in throughput and an increase in latency. I'm also creating the
> database such that its size doesn't exceed the size of the memtable.
> Therefore, all data exists in the memtable, and I'm not able to reason why a
> bigger memtable results in higher latency/lower throughput.
>
> Since everything is in DRAM, shouldn't the throughput/latency remain the
> same in all the cases?
>
> Thanks,
> Preetika
>
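For reference, the knobs the question describes map to two configuration files (a sketch assuming Cassandra 3.x defaults; the 8 GB heap is only an illustrative value, not taken from the thread):

```
# jvm.options — pin the heap to an explicit size
# (Cassandra reads these flags at startup; -Xms == -Xmx avoids heap resizing)
-Xms8G
-Xmx8G
```

```
# cassandra.yaml — on-heap memtable space in MB;
# if left unset, it defaults to 1/4 of the JVM heap,
# which is the behavior the question relies on
memtable_heap_space_in_mb: 2048
```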
