cassandra-commits mailing list archives

From "Jeremiah Jordan (JIRA)" <j...@apache.org>
Subject [jira] [Commented] (CASSANDRA-5661) Discard pooled readers for cold data
Date Wed, 26 Jun 2013 09:48:20 GMT

    [ https://issues.apache.org/jira/browse/CASSANDRA-5661?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13693876#comment-13693876 ]

Jeremiah Jordan commented on CASSANDRA-5661:
--------------------------------------------

[~xedin] LCS == LeveledCompactionStrategy.  While in theory I like the idea of the readers
expiring more quickly with timers in most cases, in practice these live on the heap, and since
LCS defaults to only 5 MB files per level you can have A LOT of sstables, roughly 200 per GB...
As [~jbellis] mentioned, that makes # of sstables * concurrent readers a big problem, which is
the problem hit above.  So we really need to bound memory usage with a hard cap, not the fuzzy
cap of "how many can I open in X seconds".

1k reqs/sec hitting a 4-level-deep LCS CF could mean 4k Readers (~500 MB) created per second.
 Now that I say that out loud, I almost think we should go back to not caching these at all,
so they always just get recycled in the young gen and never have a chance to hit the old gen.
I guess it could be a config parameter which defaults to on for STCS and off for LCS, since
STCS workloads have nowhere near as many sstables.
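
For the arithmetic: the ~500 MB/sec figure works out if each reader carries roughly 128 KB on
the heap.  That per-reader footprint is inferred from the numbers above, not measured; this is
just a back-of-the-envelope check.

// Back-of-the-envelope check of the churn numbers above (per-reader footprint is an assumption).
public class ReaderChurnEstimate
{
    public static void main(String[] args)
    {
        long requestsPerSecond = 1_000;
        int lcsLevels = 4;                // one reader per level touched by a read
        long bytesPerReader = 128 * 1024; // assumed on-heap footprint per reader

        long readersPerSecond = requestsPerSecond * lcsLevels;
        long bytesPerSecond = readersPerSecond * bytesPerReader;

        System.out.printf("%d readers/sec, ~%d MB/sec of short-lived buffers%n",
                          readersPerSecond, bytesPerSecond / (1024 * 1024));
        // -> 4000 readers/sec, ~500 MB/sec
    }
}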
                
> Discard pooled readers for cold data
> ------------------------------------
>
>                 Key: CASSANDRA-5661
>                 URL: https://issues.apache.org/jira/browse/CASSANDRA-5661
>             Project: Cassandra
>          Issue Type: Bug
>          Components: Core
>    Affects Versions: 1.2.1
>            Reporter: Jonathan Ellis
>            Assignee: Pavel Yaskevich
>             Fix For: 1.2.7
>
>
> Reader pooling was introduced in CASSANDRA-4942, but pooled RandomAccessReaders are never
> cleaned up until the SSTableReader is closed.  So memory use is "the worst case simultaneous
> RAR we had open for this file, forever."
> We should introduce a global limit on how much memory to use for RAR, and evict old ones.
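
A rough illustration of the "global limit, evict old ones" idea from the description, using an
access-ordered map as an LRU over pooled readers.  The class and method names are placeholders
for the sketch, not the actual Cassandra classes.

import java.util.Iterator;
import java.util.LinkedHashMap;
import java.util.Map;

// Sketch of a global RAR byte budget with LRU eviction of the coldest pooled readers.
public class GlobalReaderBudget
{
    private final long maxBytes;
    private long pooledBytes = 0;

    // access-ordered: iteration starts at the least recently used reader
    private final LinkedHashMap<Object, Long> pooled = new LinkedHashMap<>(16, 0.75f, true);

    public GlobalReaderBudget(long maxBytes)
    {
        this.maxBytes = maxBytes;
    }

    public synchronized void onPooled(Object reader, long bytes)
    {
        Long previous = pooled.put(reader, bytes);
        pooledBytes += bytes - (previous == null ? 0 : previous);
        evictIfOverBudget();
    }

    public synchronized void onAcquired(Object reader)
    {
        Long bytes = pooled.remove(reader);
        if (bytes != null)
            pooledBytes -= bytes;
    }

    private void evictIfOverBudget()
    {
        Iterator<Map.Entry<Object, Long>> it = pooled.entrySet().iterator();
        while (pooledBytes > maxBytes && it.hasNext())
        {
            Map.Entry<Object, Long> coldest = it.next();
            it.remove();
            pooledBytes -= coldest.getValue();
            // close the evicted reader here so its buffer is actually freed
        }
    }
}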

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators.
For more information on JIRA, see: http://www.atlassian.com/software/jira
