incubator-cassandra-user mailing list archives

From Benjamin Coverston <ben.covers...@datastax.com>
Subject Re: Native heap leaks?
Date Fri, 06 May 2011 00:56:41 GMT
How many column families do you have?

On 5/4/11 12:50 PM, Hannes Schmidt wrote:
> Hi,
>
> We are using Cassandra 0.6.12 in a cluster of 9 nodes. Each node is
> 64-bit, has 4 cores and 4G of RAM and runs on Ubuntu Lucid with the
> stock 2.6.32-31-generic kernel. We use the Sun/Oracle JDK.
>
> Here's the problem: The Cassandra process starts up with 1.1G resident
> memory (according to top) but slowly grows to 2.1G at a rate that
> seems proportional to the write load. No writes, no growth. The node
> is running other memory-sensitive applications (a second JVM for our
> in-house webapp and a short-lived C++ program) so we need to ensure
> that each process stays within certain bounds as far as memory
> requirements go. The nodes OOM and crash when the Cassandra process
> reaches 2.1G, so I can't say whether the growth is bounded or not.
>
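A minimal way to quantify that growth is to sample VmRSS from /proc/<pid>/status
at a fixed interval and correlate it with the write load. A rough Python sketch
(PID and INTERVAL are placeholders, not values from the report above):

    # Sketch: periodically sample the resident set size of a process.
    import time

    PID = 12345        # replace with the Cassandra pid
    INTERVAL = 60      # seconds between samples

    def rss_kb(pid):
        # "VmRSS:   1153024 kB" line in /proc/<pid>/status
        with open("/proc/%d/status" % pid) as f:
            for line in f:
                if line.startswith("VmRSS:"):
                    return int(line.split()[1])
        return 0

    while True:
        print("%s %d kB" % (time.strftime("%H:%M:%S"), rss_kb(PID)))
        time.sleep(INTERVAL)
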
> Looking at the /proc/$pid/smaps for the Cassandra process it seems to
> me that it is the native heap of the Cassandra JVM that is leaking. I
> attached a readable version of the smaps file generated by [1].
>
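For reference, a rough Python sketch of the kind of per-mapping Rss summary the
script in [1] produces (PID is again a placeholder):

    # Sketch: sum Rss per mapping in /proc/<pid>/smaps, largest first.
    from collections import defaultdict

    PID = 12345
    totals = defaultdict(int)
    current = "[anon]"

    with open("/proc/%d/smaps" % PID) as f:
        for line in f:
            fields = line.split()
            if not fields:
                continue
            if not fields[0].endswith(":"):
                # mapping header: "start-end perms offset dev inode [path]"
                current = fields[5] if len(fields) > 5 else "[anon]"
            elif fields[0] == "Rss:":
                totals[current] += int(fields[1])   # value is in kB

    for name, kb in sorted(totals.items(), key=lambda x: -x[1]):
        print("%10d kB  %s" % (kb, name))
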
> Some more data: Cassandra runs with default command line arguments,
> which means it gets 1G heap. The JNA jar is present and Cassandra logs
> that the memory locking was successful. In storage-conf.xml,
> DiskAccessMode is mmap_index_only. Other than that and some increased
> timeouts we left the defaults. Swap is completely disabled. I don't
> think this is related but I am mentioning it anyway: overcommit [2]
> is always-on (vm.overcommit_memory=1). Without that we get OOMs when
> our application JVM is fork()'ing and exec()'ing our C++ program even
> though there is enough free RAM to satisfy the demands of the C++
> program. We think this is caused by a flawed kernel heuristic that
> assumes that the forked process (our C++ app) is as big as the forking
> one (the 2nd JVM). Anyway, the Cassandra process leaks with both
> vm.overcommit_memory=0 (the default) and 1.
>
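One way to double-check the overcommit setting and whether the memory locking
really took effect is to read /proc directly; a quick Python sketch (PID is a
placeholder):

    # Sketch: show the overcommit mode and how much memory the JVM has locked.
    # VmLck should be non-zero if JNA's mlockall succeeded.
    PID = 12345

    with open("/proc/sys/vm/overcommit_memory") as f:
        print("vm.overcommit_memory = %s" % f.read().strip())

    with open("/proc/%d/status" % PID) as f:
        for line in f:
            if line.startswith(("VmLck:", "VmRSS:", "VmSize:")):
                print(line.strip())
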
> Whether it is the native heap that leaks or something else, I think
> that 1.1G of additional RAM for 1G of Java heap can't be normal. I'd
> be grateful for any insights or pointers on what to try next.
>
> [1] http://bmaurer.blogspot.com/2006/03/memory-usage-with-smaps.html
> [2] http://www.win.tue.nl/~aeb/linux/lk/lk-9.html#ss9.6

-- 
Ben Coverston
DataStax -- The Apache Cassandra Company
http://www.datastax.com/

