cassandra-commits mailing list archives

From "Pavel Yaskevich (JIRA)" <j...@apache.org>
Subject [jira] [Commented] (CASSANDRA-5661) Discard pooled readers for cold data
Date Sat, 13 Jul 2013 08:09:49 GMT

    [ https://issues.apache.org/jira/browse/CASSANDRA-5661?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13707694#comment-13707694 ]

Pavel Yaskevich commented on CASSANDRA-5661:
--------------------------------------------

I understand what the ideal use case for LTQ is; I wanted to try it out since it was mentioned
a couple of times as giving better results than CLQ under load.

I expected FileCacheService to be faster; I was just trying to check how much latency it actually
adds, even in a synthetic scenario like stress with no writes or compaction. I strongly think
(and have explained why multiple times) that we can't allow a degradation of more than 0.5 ms
in the percentiles on such a critical path, and that expiring items in bulk is okay for us: in
steady state, eviction would be driven by compaction closing all of the open file descriptors
for each compacted sstable, so timed expiry would be very infrequent, since read patterns don't
change often in production systems.
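
To make the queue comparison above concrete, here is a minimal, purely illustrative sketch (not
the actual FileCacheService code; class and method names are made up) of a per-file reader pool
where the backing queue can be swapped between CLQ and LTQ to compare behaviour under load:

    import java.util.Queue;
    import java.util.concurrent.ConcurrentLinkedQueue;
    import java.util.concurrent.LinkedTransferQueue;

    public class ReaderPoolSketch<R>
    {
        // CLQ is a plain unbounded lock-free queue; LTQ additionally supports
        // hand-off (transfer) semantics, which is why it was worth benchmarking.
        private final Queue<R> pool;

        public ReaderPoolSketch(boolean useTransferQueue)
        {
            this.pool = useTransferQueue ? new LinkedTransferQueue<R>()
                                         : new ConcurrentLinkedQueue<R>();
        }

        // Returns a pooled reader, or null if the caller must open a new one.
        public R borrow()
        {
            return pool.poll();
        }

        // Returns a reader to the pool once the read has finished.
        public void release(R reader)
        {
            pool.offer(reader);
        }
    }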
                
> Discard pooled readers for cold data
> ------------------------------------
>
>                 Key: CASSANDRA-5661
>                 URL: https://issues.apache.org/jira/browse/CASSANDRA-5661
>             Project: Cassandra
>          Issue Type: Bug
>          Components: Core
>    Affects Versions: 1.2.1
>            Reporter: Jonathan Ellis
>            Assignee: Pavel Yaskevich
>             Fix For: 2.0
>
>         Attachments: CASSANDRA-5661-multiway-per-sstable.patch, CASSANDRA-5661.patch,
> CASSANDRA-5661-v2-global-multiway-per-sstable.patch, DominatorTree.png, Histogram.png
>
>
> Reader pooling was introduced in CASSANDRA-4942 but pooled RandomAccessReaders are never
> cleaned up until the SSTableReader is closed.  So memory use is "the worst case simultaneous
> RAR we had open for this file, forever."
> We should introduce a global limit on how much memory to use for RAR, and evict old ones.
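
As a rough illustration of the global limit the ticket describes (a sketch only, with
hypothetical names; this is not one of the attached patches): a shared memory budget checked
when readers are recycled, plus bulk invalidation when an sstable goes away, e.g. after
compaction:

    import java.util.Queue;
    import java.util.concurrent.ConcurrentHashMap;
    import java.util.concurrent.ConcurrentLinkedQueue;
    import java.util.concurrent.ConcurrentMap;
    import java.util.concurrent.atomic.AtomicLong;

    public class BoundedReaderCacheSketch
    {
        // Stand-in for a pooled RandomAccessReader; names here are illustrative.
        interface PooledReader
        {
            long bufferSize();   // memory retained by this reader
            void deallocate();   // close the file descriptor / free the buffer
        }

        private final long maxMemory;                     // global limit, in bytes
        private final AtomicLong usedMemory = new AtomicLong();
        private final ConcurrentMap<String, Queue<PooledReader>> cache =
                new ConcurrentHashMap<String, Queue<PooledReader>>();

        public BoundedReaderCacheSketch(long maxMemoryBytes)
        {
            this.maxMemory = maxMemoryBytes;
        }

        // Recycle a reader only if the global budget allows it; otherwise drop it.
        public void recycle(String path, PooledReader reader)
        {
            if (usedMemory.addAndGet(reader.bufferSize()) > maxMemory)
            {
                usedMemory.addAndGet(-reader.bufferSize());
                reader.deallocate();
                return;
            }

            Queue<PooledReader> readers = cache.get(path);
            if (readers == null)
            {
                Queue<PooledReader> created = new ConcurrentLinkedQueue<PooledReader>();
                readers = cache.putIfAbsent(path, created);
                if (readers == null)
                    readers = created;
            }
            readers.add(reader);
        }

        // Take a pooled reader for this file, or null if the caller must open one.
        public PooledReader get(String path)
        {
            Queue<PooledReader> readers = cache.get(path);
            PooledReader reader = readers == null ? null : readers.poll();
            if (reader != null)
                usedMemory.addAndGet(-reader.bufferSize());
            return reader;
        }

        // Bulk eviction: when an sstable is compacted away, close every pooled
        // reader for it at once instead of waiting for timed expiry.
        public void invalidate(String path)
        {
            Queue<PooledReader> readers = cache.remove(path);
            if (readers == null)
                return;

            for (PooledReader reader : readers)
            {
                usedMemory.addAndGet(-reader.bufferSize());
                reader.deallocate();
            }
        }
    }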

