cassandra-user mailing list archives

From Robert Coli <rc...@digg.com>
Subject Re: Extra Large Memtables
Date Mon, 14 Feb 2011 20:54:57 GMT
On Sat, Feb 12, 2011 at 11:17 PM, E S <tr1sklion@yahoo.com> wrote:
> While experimenting with this, I found a bug where you can't have memtable
> throughput configured past 2 gigs without an integer overflow screwing up the
> flushes.  That makes me feel like I'm in uncharted territory :).

I am sure the project would appreciate a JIRA ticket detailing how to
reproduce this behavior, which sounds like a bug.

https://issues.apache.org/jira/browse/CASSANDRA
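
For anyone curious why the 2 GB boundary matters: a plausible (hypothetical, not Cassandra's actual code) failure mode is converting a throughput setting in MB to bytes using Java's 32-bit signed int, which wraps negative at 2048 MB. A Python sketch simulating that arithmetic:

```python
# Hypothetical illustration of a 32-bit int overflow when converting a
# memtable throughput setting from MB to bytes. This simulates Java's
# signed int wraparound; it is not Cassandra's actual implementation.

def mb_to_bytes_int32(throughput_mb):
    """Convert MB to bytes, wrapping like a signed 32-bit integer."""
    result = (throughput_mb * 1024 * 1024) & 0xFFFFFFFF
    if result >= 2**31:
        result -= 2**32  # reinterpret as negative, as Java's int would
    return result

print(mb_to_bytes_int32(2047))  # 2146435072 -- still fits in an int
print(mb_to_bytes_int32(2048))  # -2147483648 -- wrapped negative
```

A negative flush threshold like that would explain flushes misbehaving as soon as the configured throughput passes 2 GB.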

Regarding very large memtables, it is important to recognize that
throughput refers only to the size of the COLUMN VALUES, not, for
example, their names. In a case where an empty column value is being
stored under a UUID column name, your memtable can grow much larger
than the value in the cf definition suggests. I have seen memtables
configured at 1GB flush to disk as 1.7GB, and they are likely
significantly larger still in memory due to Java object overhead.
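
To make the values-only accounting concrete, here is a small sketch (my own illustration, not Cassandra code) of how a throughput counter that sums only value bytes diverges from the serialized size once column names dominate:

```python
import uuid

# Hypothetical accounting sketch: 1000 columns with empty values stored
# under 16-byte UUID names. The values-only counter (what "throughput"
# tracks) stays at zero while the serialized size keeps growing.

columns = [(uuid.uuid4().bytes, b"") for _ in range(1000)]

counted = sum(len(value) for name, value in columns)
serialized = sum(len(name) + len(value) for name, value in columns)

print(counted)     # 0 -- throughput accounting sees nothing
print(serialized)  # 16000 -- the names alone are 16 KB
```

In-memory usage would be larger again, since each column also carries per-object overhead that neither number captures.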

=Rob
