I think I had (and have) a similar problem:
http://cassandra-user-incubator-apache-org.3065146.n2.nabble.com/OOM-or-what-settings-to-use-on-AWS-large-td6504060.html
My memory usage grew slowly until I ran out of memory and the OS killed the process (we run without swap).

I'm still on 0.7.4, but I'm rolling out 0.8.1 next week, which I'm hoping will fix the problem. I'm using CentOS with Sun JDK 1.6.0_24-b07.

Will

On Thu, Jul 7, 2011 at 7:41 AM, Daniel Doubleday <daniel.doubleday@gmx.net> wrote:
Hm - I had to dig deeper, and it really does look like a native memory leak to me:

Resident memory is still growing by roughly 100 MB a day; the Cassandra process is > 8 GB now.

I checked the Cassandra process with pmap -x.

Here's the human-readable (aggregated) output (a rough script for reproducing the aggregation is sketched after the listing):

The format is "<mapping>: RSS in KB"

Summary:

Total SST: 1961616
Anon RSS: 6499640

Total RSS: 8478376

Here's a little more detail:

SSTables (data and index files)
******
Attic: 0
PrivateChatNotification: 38108
Schema: 0
PrivateChat: 161048
UserData: 116788
HintsColumnFamily: 0
Rooms: 100548
Tracker: 476
Migrations: 0
ObjectRepository: 793680
BlobStore: 350924
Activities: 400044
LocationInfo: 0

Libraries
******
javajar: 2292
nativelib: 13028

Other
******
28201: 32
jna979649866618987247.tmp: 92
locale-archive: 1492
[stack]: 132
java: 44
ffi8TsQPY(deleted): 8

And
******
[anon]: 6499640
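
For anyone who wants to reproduce the aggregation: a rough Python sketch along these lines should do it. The grouping of *-Data.db / *-Index.db mappings by column family is just an assumption about the file naming, not necessarily exactly what I ran.

#!/usr/bin/env python
# Rough sketch: sum RSS (in KB) per mapping from `pmap -x <pid>` output.
# Assumes the procps pmap -x columns: Address Kbytes RSS Dirty Mode Mapping.
import subprocess
import sys
from collections import defaultdict

def rss_by_mapping(pid):
    p = subprocess.Popen(['pmap', '-x', str(pid)],
                         stdout=subprocess.PIPE, universal_newlines=True)
    out, _ = p.communicate()
    totals = defaultdict(int)
    for line in out.splitlines():
        fields = line.split()
        # data lines start with a hex address and have at least 6 columns
        if len(fields) < 6 or not all(c in '0123456789abcdef' for c in fields[0]):
            continue
        if not fields[2].isdigit():
            continue
        rss = int(fields[2])
        mapping = ' '.join(fields[5:])       # e.g. "[ anon ]", "libjna.so", "Rooms-f-12-Data.db"
        if mapping.endswith('-Data.db') or mapping.endswith('-Index.db'):
            mapping = mapping.split('-')[0]  # group sstable files under their column family
        totals[mapping] += rss
    return totals

if __name__ == '__main__':
    for name, kb in sorted(rss_by_mapping(sys.argv[1]).items(), key=lambda x: -x[1]):
        print('%s: %d' % (name, kb))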


Maybe the output of pmap is totally misleading, but my interpretation is that only about 2 GB of RSS is attributed to paged-in sstables.
I have one large anon block that looks like this:

Address           Kbytes     RSS   Dirty Mode   Mapping
000000073f600000       0 3093248 3093248 rwx--    [ anon ]

This is the Java heap, which is allocated up front at startup (-Xms == -Xmx) and mlocked.

So that still leaves roughly 3.5 GB of anon memory unaccounted for.
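
To cross-check the pmap interpretation, the same split can be read straight out of /proc/<pid>/smaps by summing the Rss fields separately for file-backed and anonymous mappings. A minimal sketch (the field handling is my assumption about the smaps layout on our kernels):

#!/usr/bin/env python
# Rough sketch: split resident memory into file-backed vs anonymous
# by summing the Rss: fields in /proc/<pid>/smaps per mapping type.
import sys

def rss_split(pid):
    file_backed_kb, anon_kb = 0, 0
    is_file = False
    with open('/proc/%s/smaps' % pid) as f:
        for line in f:
            fields = line.split()
            if not fields:
                continue
            if '-' in fields[0] and ':' not in fields[0]:
                # mapping header, e.g. "7f12340000-7f12350000 r-xp 00000000 08:01 123 /usr/lib/libfoo.so"
                is_file = len(fields) > 5 and fields[5].startswith('/')
            elif fields[0] == 'Rss:':
                if is_file:
                    file_backed_kb += int(fields[1])
                else:
                    anon_kb += int(fields[1])
    return file_backed_kb, anon_kb

if __name__ == '__main__':
    file_kb, anon_kb = rss_split(sys.argv[1])
    print('file-backed RSS: %d KB   anonymous RSS: %d KB' % (file_kb, anon_kb))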

We haven't deployed https://issues.apache.org/jira/browse/CASSANDRA-2654 yet, and that might account for part of it, but I don't think it's the main problem.
As I said, memory goes up by about 100 MB each day, pretty much linearly.
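
If anyone wants to watch the trend on their own nodes, logging VmRSS once an hour is enough to see a ~100 MB/day slope. A trivial sketch:

#!/usr/bin/env python
# Rough sketch: log the resident set size of a pid once per hour so the
# growth rate can be plotted or eyeballed later.
import sys
import time

def vm_rss_kb(pid):
    # VmRSS is reported in kB in /proc/<pid>/status
    with open('/proc/%s/status' % pid) as f:
        for line in f:
            if line.startswith('VmRSS:'):
                return int(line.split()[1])
    return 0

if __name__ == '__main__':
    pid = sys.argv[1]
    while True:
        print('%d %d' % (int(time.time()), vm_rss_kb(pid)))
        sys.stdout.flush()
        time.sleep(3600)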

It would be great if anyone could verify this by running pmap, or talk me off the ledge by explaining that nothing is the way it seems.

All this might be heavily OS-specific, so maybe it only shows up on Debian?

Thanks a lot
Daniel 

On Jul 4, 2011, at 2:42 PM, Jonathan Ellis wrote:

mmap'd data will be attributed to res, but the OS can page it out
instead of killing the process.

On Mon, Jul 4, 2011 at 5:52 AM, Daniel Doubleday
<daniel.doubleday@gmx.net> wrote:
Hi all,
we have a memory problem with Cassandra: res grows without bound (well, until
the OS kills the process, because we don't have swap).
I found a thread that's about the same problem but on OpenJDK:
http://cassandra-user-incubator-apache-org.3065146.n2.nabble.com/Very-high-memory-utilization-not-caused-by-mmap-on-sstables-td5840777.html
We are on Debian with Sun JDK.
Resident memory is 7.4 GB while the heap is restricted to 3 GB (see the command line and top output below; a small sketch for comparing RSS against -Xmx follows them).
Is anyone else seeing this with the Sun JDK?
Cheers,
Daniel
:/home/dd# java -version
java version "1.6.0_24"
Java(TM) SE Runtime Environment (build 1.6.0_24-b07)
Java HotSpot(TM) 64-Bit Server VM (build 19.1-b02, mixed mode)
:/home/dd# ps aux |grep java
cass     28201  9.5 46.8 372659544 7707172 ?   SLl  May24 5656:21
/usr/bin/java -ea -XX:+UseThreadPriorities -XX:ThreadPriorityPolicy=42
-Xms3000M -Xmx3000M -Xmn400M ...
  PID USER      PR  NI  VIRT  RES  SHR S %CPU %MEM    TIME+  COMMAND
28201 cass      20   0  355g 7.4g 1.4g S    8 46.9   5656:25 java
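
A quick way to quantify the gap is to compare VmRSS against the -Xmx flag from the command line. A rough sketch, assuming -Xmx is given in megabytes as above:

#!/usr/bin/env python
# Rough sketch: report how far a JVM's resident set exceeds its -Xmx setting.
# Assumes -Xmx is specified in megabytes (e.g. -Xmx3000M), as in the command line above.
import sys

def read_proc(pid, name):
    with open('/proc/%s/%s' % (pid, name)) as f:
        return f.read()

def xmx_mb(pid):
    # /proc/<pid>/cmdline is NUL-separated
    for arg in read_proc(pid, 'cmdline').split('\0'):
        if arg.startswith('-Xmx') and arg.upper().endswith('M'):
            return int(arg[4:-1])
    return None

def rss_mb(pid):
    for line in read_proc(pid, 'status').splitlines():
        if line.startswith('VmRSS:'):
            return int(line.split()[1]) // 1024
    return None

if __name__ == '__main__':
    pid = sys.argv[1]
    heap, rss = xmx_mb(pid), rss_mb(pid)
    if heap is None or rss is None:
        sys.exit('could not determine -Xmx or VmRSS for pid %s' % pid)
    print('-Xmx: %d MB, VmRSS: %d MB, outside-heap: %d MB' % (heap, rss, rss - heap))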






--
Jonathan Ellis
Project Chair, Apache Cassandra
co-founder of DataStax, the source for professional Cassandra support
http://www.datastax.com