cassandra-user mailing list archives

From Idrén, Johan <>
Subject RE: memtable mem usage off by 10?
Date Wed, 04 Jun 2014 11:04:01 GMT
Aha, ok. Thanks.

Trying to understand what my cluster is doing:

cassandra.db.memtable_data_size only gives me the size of the actual data, not the memtable's heap
memory usage. Is there a way to check heap memory usage?

I would expect to hit the flush_largest_memtables_at threshold, and that would then be what
triggers the memtable flush to an sstable? The default is 0.75?

Then I would expect the maximum memory used to be at most ~3x what I was seeing before I set
memtable_total_space_in_mb (1/4 of the heap by default, up to 3/4 before a flush), instead
of close to 10x (250MB vs 2GB).
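The ~3x expectation can be written out as arithmetic. All numbers below are taken from this thread (defaults as described here, observed flush points as reported); nothing is measured:

```python
# Johan's expectation as a back-of-the-envelope check.
default_fraction = 1 / 4       # memtable_total_space_in_mb default: 1/4 of heap
flush_threshold = 3 / 4        # flush_largest_memtables_at default: 0.75
observed_default_mb = 250      # flush point seen with default settings

# Raising the ceiling to the emergency-flush threshold should buy at most 3x...
expected_max_factor = flush_threshold / default_fraction
expected_ceiling_mb = observed_default_mb * expected_max_factor

# ...but what was actually observed was ~2GB.
observed_mb = 2048
print(expected_max_factor, expected_ceiling_mb, observed_mb / observed_default_mb)
# 3.0 750.0 8.192
```

So the expected ceiling would be ~750MB, while the observed ~2GB is roughly 8x the default flush point.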

This is of course assuming that the overhead scales linearly with the amount of data in my
table; we're using one table with three cells in this case. If it hardly increases at all,
then I'll give up, I guess :)

At least until 2.1.0 comes out and I can compare.



From: Benedict Elliott Smith <>
Sent: Wednesday, June 4, 2014 12:33 PM
Subject: Re: memtable mem usage off by 10?

These measurements tell you the amount of user data stored in the memtables, not the amount
of heap used to store it, so the same applies.

On 4 June 2014 11:04, Idrén, Johan <<>>

I'm not measuring memtable size by looking at the sstables on disk, no. I'm looking through
the JMX data. So I would believe (or hope) that I'm getting relevant data.

If I have a heap of 10GB and set the memtable usage to 20GB, I would expect to hit other problems,
but I'm not seeing heap memory usage over 10GB, and the machine (which has ~30GB of
memory) is showing ~10GB free, with ~12GB used by Cassandra and the rest in caches.

Reading 8k rows/s, writing 2k rows/s on a 3-node cluster. So it's not idling.



From: Benedict Elliott Smith <<>>
Sent: Wednesday, June 4, 2014 11:56 AM
Subject: Re: memtable mem usage off by 10?

If you are storing small values in your columns, the object overhead is very substantial.
So what is 400MB on disk may well be 4GB in memtables; if you are measuring the memtable
size by the resulting sstable size, you are not getting an accurate picture. This overhead
has been reduced by about 90% in the upcoming 2.1 release, through tickets 6271<>,
6689<> and 6694<>.
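A rough illustration of how per-cell object overhead dwarfs small payloads. The overhead figure below is an assumed, illustrative number (JVM object headers, references, and per-cell bookkeeping), not Cassandra's actual internal accounting:

```python
# Why "400MB on disk may well be 4GB in memtables" for small values.
payload_bytes = 10               # a small column value
per_cell_overhead_bytes = 90     # assumed: headers, references, bookkeeping

on_disk_bytes = payload_bytes    # sstables store close to the raw data
on_heap_bytes = payload_bytes + per_cell_overhead_bytes

print(on_heap_bytes / on_disk_bytes)  # 10.0 -- a ~10x blowup for tiny cells
```

With larger payloads the same fixed overhead amortizes away, which is why the ratio depends so strongly on cell size.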

On 4 June 2014 10:49, Idrén, Johan <<>>


I'm seeing some strange behavior from the memtables, in both 1.2.13 and 2.0.7: basically, it
looks like they're using 10x less memory than they should based on the documentation and options.

10GB heap for both clusters.

1.2.x should use 1/3 of the heap for memtables, but it uses at most ~300MB before flushing.

2.0.7: the same, but with 1/4 of the heap, and ~250MB.

In the 2.0.7 cluster I set memtable_total_space_in_mb to 4096, which then allowed Cassandra
to use up to ~400MB for memtables...

I'm now running with memtable_total_space_in_mb at 20480, and Cassandra is using ~2GB for memtables.

So, off by 10 somewhere? Has anyone else seen this? I can't find a JIRA for any bug connected
to this.
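Both runs point at the same constant. A quick sanity check on the numbers reported above (values copied straight from this message; "data size" here means the memtable data size as seen over JMX):

```python
# Implied configured-space-to-data ratio from the two runs reported above.
configured_mb = [4096, 20480]    # memtable_total_space_in_mb settings tried
observed_data_mb = [400, 2048]   # memtable data size seen before flush (~400MB, ~2GB)

ratios = [c / d for c, d in zip(configured_mb, observed_data_mb)]
print(ratios)  # [10.24, 10.0]
```

A consistent ~10x gap across both settings would fit a fixed per-byte heap overhead rather than a one-off misconfiguration, which matches the object-overhead explanation given earlier in the thread.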

java 1.7.0_55, JNA 4.1.0 (for the 2.0 cluster)


