cassandra-commits mailing list archives

From "Tomas Salfischberger (JIRA)" <j...@apache.org>
Subject [jira] [Commented] (CASSANDRA-6191) Memory exhaustion with large number of compressed SSTables
Date Sun, 13 Oct 2013 18:45:41 GMT

    [ https://issues.apache.org/jira/browse/CASSANDRA-6191?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13793751#comment-13793751 ]

Tomas Salfischberger commented on CASSANDRA-6191:
-------------------------------------------------

bq. Did you mean to link something else?

Oops, I meant: CASSANDRA-6092

bq. I'm afraid not; it's a bit involved (and in fact caused a regression in CASSANDRA-6149), so we're being cautious with 1.2.

How about a marker interface (something like RecycleAwareRandomAccessReader) implemented by CompressedRandomAccessReader? PoolingSegmentedFile.recycle() could check for it and call RecycleAwareRandomAccessReader.recycle(), which would set the reference to the ByteBuffer to null. A simple check in CompressedRandomAccessReader.decompressChunk() could then re-allocate the buffer when necessary.

Or would this cause too much release-and-reallocate churn on the ByteBuffer during startup? (I'm not sure how the re-use flow from the pool in PoolingSegmentedFile works.)
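
Roughly what I have in mind, as a sketch only (the stub classes and method bodies below are placeholders to make the idea concrete, not the actual 1.2 code):

{code:java}
import java.nio.ByteBuffer;

// Marker/callback interface checked by the pool when a reader is recycled.
interface RecycleAwareRandomAccessReader
{
    void recycle(); // drop heavy per-reader state while the reader sits in the pool
}

// Stand-in for o.a.c.io.util.RandomAccessReader, only so the sketch is self-contained.
class ReaderStub {}

// Stand-in for CompressedRandomAccessReader.
class CompressedReaderStub extends ReaderStub implements RecycleAwareRandomAccessReader
{
    private ByteBuffer compressed; // the 65536-byte (or enlarged) chunk buffer

    public void recycle()
    {
        compressed = null; // release the buffer while pooled so GC can reclaim it
    }

    void decompressChunk(int chunkLength)
    {
        // Re-allocate lazily on the first read after a recycle,
        // or when the chunk is larger than the current buffer.
        if (compressed == null || compressed.capacity() < chunkLength)
            compressed = ByteBuffer.allocate(chunkLength);
        // ... decompress into 'compressed' as before ...
    }
}

// Stand-in for PoolingSegmentedFile.
class PoolStub
{
    void recycle(ReaderStub reader)
    {
        if (reader instanceof RecycleAwareRandomAccessReader)
            ((RecycleAwareRandomAccessReader) reader).recycle();
        // ... then return the reader to the pool as before ...
    }
}
{code}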

> Memory exhaustion with large number of compressed SSTables
> ----------------------------------------------------------
>
>                 Key: CASSANDRA-6191
>                 URL: https://issues.apache.org/jira/browse/CASSANDRA-6191
>             Project: Cassandra
>          Issue Type: Bug
>          Components: Core
>         Environment: OS: Debian 7.1
> Java: Oracle 1.7.0_25
> Cassandra: 1.2.10
> Memory: 24GB
> Heap: 8GB
>            Reporter: Tomas Salfischberger
>
> Not sure "bug" is the right description, because I can't say for sure that the large
number of SSTables is the cause of the memory issues. I'll share my research so far:
> Under high read-load with a very large number of compressed SSTables (caused by the initial default 5 MB sstable_size in LCS), it seems memory is exhausted without any room for GC to fix it. It tries to GC but doesn't reclaim much.
> The node first hits the "emergency valves": flushing all memtables, then reducing caches. Finally it logs 0.99+ heap usage and either hangs with GC failures or crashes with OutOfMemoryError.
> I've taken a heap dump and started analyzing it to find out what's wrong. The memory seems to be used by the byte[] backing the HeapByteBuffer in the "compressed" field of org.apache.cassandra.io.compress.CompressedRandomAccessReader. The byte[] are generally 65536 bytes in size, matching the block size of the compression.
> Looking further in the heap dump I can see that these readers are part of the pool in org.apache.cassandra.io.util.CompressedPoolingSegmentedFile, which is linked to the "dfile" field of org.apache.cassandra.io.sstable.SSTableReader. The dump file lists 45248 instances of CompressedRandomAccessReader.
> Is this working as intended? Is there a leak somewhere? Or should there be an alternative strategy and/or a warning for cases where a node is trying to read far too many SSTables?
> EDIT:
> Searching through the code I found that PoolingSegmentedFile keeps a pool of RandomAccessReaders for re-use, while CompressedRandomAccessReader allocates a ByteBuffer in its constructor and (to make things worse) enlarges it if it's reading a large chunk. This (sometimes enlarged) ByteBuffer is then kept alive because it is part of the CompressedRandomAccessReader, which is in turn kept alive as part of the pool in the PoolingSegmentedFile.
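
For scale, a rough back-of-envelope check on the numbers above (assuming each pooled reader holds exactly one un-enlarged 65536-byte buffer): 45248 readers x 65536 bytes ≈ 2.97 GB, i.e. roughly a third of the 8 GB heap tied up in pooled decompression buffers alone, before counting any enlarged buffers or the rest of the per-reader state.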



--
This message was sent by Atlassian JIRA
(v6.1#6144)
